modelId (string, length 5-122) | author (string, length 2-42) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, length 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (unknown) | card (string, length 1-901k)
---|---|---|---|---|---|---|---|---|---|
HomayounSadri/bert-base-uncased-finetuned-squad-v2 | HomayounSadri | "2022-05-05T19:18:27Z" | 1,734 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-05-05T15:37:29Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: HomayounSadri/bert-base-uncased-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HomayounSadri/bert-base-uncased-finetuned-squad-v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8470
- Validation Loss: 1.0267
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
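For readers who want to recreate this setup, here is a minimal sketch (not part of the generated card) of how the optimizer configuration above maps onto the `transformers` Keras utilities:
```python
# Sketch only: reconstructs the AdamWeightDecay + PolynomialDecay settings listed above.
import tensorflow as tf
from transformers import AdamWeightDecay

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```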
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3879 | 1.0715 | 0 |
| 0.8470 | 1.0267 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
digiplay/fishmix_other_v1 | digiplay | "2023-11-02T13:04:59Z" | 1,734 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-06T19:04:48Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
23-3-9 experiment: "xianyu" (salted fish) mix stylization - fish mix the other Style
https://civitai.com/models/17565/23-3-9-mix-fish-mix-the-other-style
Original Author's DEMO image :

Sample images I made (using the Hugging Face API):
image prompt + ***realistic*** keywords

image prompt ***without realistic*** keywords

photorealism (8k UHD RAW,photorealistic,realistic:1.6) ,golden medium hair beautiful girl

|
frank098/orca_mini_3b_juniper | frank098 | "2023-07-12T03:40:35Z" | 1,734 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-12T00:11:15Z" | Entry not found |
QuantFactory/Nxcode-CQ-7B-orpo-GGUF | QuantFactory | "2024-06-08T11:20:27Z" | 1,734 | 1 | null | [
"gguf",
"code",
"text-generation",
"arxiv:2403.07691",
"base_model:NTQAI/Nxcode-CQ-7B-orpo",
"license:other",
"region:us"
] | text-generation | "2024-06-08T07:51:44Z" | ---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
tags:
- code
pipeline_tag: text-generation
license: other
base_model: NTQAI/Nxcode-CQ-7B-orpo
---
# QuantFactory/Nxcode-CQ-7B-orpo-GGUF
This is a quantized version of [NTQAI/Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo), created using llama.cpp.
## Model Description
Nxcode-CQ-7B-orpo is a [Monolithic Preference Optimization without Reference Model (ORPO)](https://arxiv.org/abs/2403.07691) fine-tune of Qwen/CodeQwen1.5-7B on 100k samples of high-quality ranking data.
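For orientation only, a generic ORPO fine-tuning sketch with TRL is shown below; this is not the authors' training code, and the dataset id, column layout, and hyperparameters are assumptions:
```python
# Generic ORPO sketch (assumes a preference dataset with "prompt"/"chosen"/"rejected" columns).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/CodeQwen1.5-7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B")
dataset = load_dataset("your-org/code-ranking-data", split="train")  # hypothetical dataset id

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="nxcode-orpo", beta=0.1, max_length=2048),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```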
## [Evalplus](https://github.com/evalplus/evalplus)
| EvalPlus | pass@1 |
| --- | --- |
| HumanEval | 86.6 |
| HumanEval+ | 83.5 |
| MBPP(v0.2.0) | 82.3 |
| MBPP+(v0.2.0) | 70.4 |
We use a simple template to generate the solution for evalplus:
```python
"Complete the following Python function:\n{prompt}"
```
[Evalplus Leaderboard](https://evalplus.github.io/leaderboard.html)
| Models | HumanEval | HumanEval+|
|------ | ------ | ------ |
| GPT-4-Turbo (April 2024)| 90.2| 86.6|
| GPT-4 (May 2023)| 88.4| 81.17|
| GPT-4-Turbo (Nov 2023)| 85.4| 79.3|
| CodeQwen1.5-7B-Chat| 83.5| 78.7|
| claude-3-opus (Mar 2024)| 82.9| 76.8|
| DeepSeek-Coder-33B-instruct| 81.1| 75.0|
| WizardCoder-33B-V1.1| 79.9| 73.2|
| OpenCodeInterpreter-DS-33B| 79.3| 73.8|
| speechless-codellama-34B-v2.0| 77.4| 72|
| GPT-3.5-Turbo (Nov 2023)| 76.8| 70.7|
| Llama3-70B-instruct| 76.2| 70.7|
## Bigcode Leaderboard
[Bigcode Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
**09/05/2024**: ranked #1 by average score and #2 by win rate.

## Quickstart
The code snippet below shows how to load the tokenizer and model and generate content with `apply_chat_template`. Upgrade `transformers` if you receive an error when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"NTQAI/Nxcode-CQ-7B-orpo",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")
# Outer quotes switched to ''' so the docstring's """ does not end the string early.
prompt = '''Complete the following Python function:
from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''
messages = [
{"role": "user", "content": prompt}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
``` |
MaziyarPanahi/TopEvolution-GGUF | MaziyarPanahi | "2024-06-15T17:44:04Z" | 1,734 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/TopEvolution"
] | text-generation | "2024-06-15T17:22:45Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- base_model:mergekit-community/mergekit-slerp-ebgdloh
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: TopEvolution-GGUF
base_model: mergekit-community/TopEvolution
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/TopEvolution-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/TopEvolution](https://huggingface.co/mergekit-community/TopEvolution)
## Description
[MaziyarPanahi/TopEvolution-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution-GGUF) contains GGUF format model files for [mergekit-community/TopEvolution](https://huggingface.co/mergekit-community/TopEvolution).
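As an illustration (not from the original card), one of the GGUF files in this repository could be loaded with `llama-cpp-python` roughly as follows; the filename and quantization level are assumptions, so check the repository's file list:
```python
# Sketch: loading a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="TopEvolution.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,
)
out = llm("Write a short haiku about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```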
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
MaziyarPanahi/mergekit-slerp-jfynlwk-GGUF | MaziyarPanahi | "2024-06-18T05:13:07Z" | 1,734 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-jfynlwk"
] | text-generation | "2024-06-18T04:50:45Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- base_model:WizardLM/WizardMath-7B-V1.1
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-jfynlwk-GGUF
base_model: mergekit-community/mergekit-slerp-jfynlwk
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-jfynlwk-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jfynlwk-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-jfynlwk](https://huggingface.co/mergekit-community/mergekit-slerp-jfynlwk)
## Description
[MaziyarPanahi/mergekit-slerp-jfynlwk-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jfynlwk-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-jfynlwk](https://huggingface.co/mergekit-community/mergekit-slerp-jfynlwk).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
klandtech/name_list_gguf | klandtech | "2024-06-22T06:42:48Z" | 1,734 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | "2024-06-22T06:24:22Z" | ---
license: mit
---
|
MaziyarPanahi/mergekit-slerp-utpztvu-GGUF | MaziyarPanahi | "2024-06-17T02:22:44Z" | 1,733 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-utpztvu"
] | text-generation | "2024-06-17T01:56:27Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
- base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-utpztvu-GGUF
base_model: mergekit-community/mergekit-slerp-utpztvu
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-utpztvu-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-utpztvu-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-utpztvu](https://huggingface.co/mergekit-community/mergekit-slerp-utpztvu)
## Description
[MaziyarPanahi/mergekit-slerp-utpztvu-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-utpztvu-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-utpztvu](https://huggingface.co/mergekit-community/mergekit-slerp-utpztvu).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
google-bert/bert-base-german-dbmdz-cased | google-bert | "2024-02-19T11:03:54Z" | 1,732 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language: de
license: mit
---
This model is the same as [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased). See the [dbmdz/bert-base-german-cased model card](https://huggingface.co/dbmdz/bert-base-german-cased) for details on the model. |
Helsinki-NLP/opus-mt-eu-en | Helsinki-NLP | "2023-08-16T11:34:04Z" | 1,732 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eu",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-eu-en
* source languages: eu
* target languages: en
* OPUS readme: [eu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eu-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eu.en | 46.1 | 0.638 |
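A minimal usage sketch (not part of the original card), assuming the `transformers` translation pipeline:
```python
# Sketch: Basque-to-English translation with this checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-eu-en")
print(translator("Kaixo, zer moduz?"))  # e.g. "Hello, how are you?"
```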
|
timm/convnext_base.fb_in1k | timm | "2024-02-10T23:26:53Z" | 1,732 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T07:06:24Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for convnext_base.fb_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.6
- GMACs: 15.4
- Activations (M): 28.8
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_base.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_base.fb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 128, 56, 56])
    #  torch.Size([1, 256, 28, 28])
    #  torch.Size([1, 512, 14, 14])
    #  torch.Size([1, 1024, 7, 7])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_base.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
lvkaokao/llama2-7b-hf-instruction-lora | lvkaokao | "2023-08-08T05:37:33Z" | 1,732 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-08T04:44:07Z" | Entry not found |
Chrisisis/5HpAerjLUQgL9yNFozC4NZf7gXPX1ZR3C4t9aWQsonzHQn7r_vgg | Chrisisis | "2024-02-24T08:24:59Z" | 1,732 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-05T18:31:43Z" | Entry not found |
backyardai/WestLake-10.7B-v2-GGUF | backyardai | "2024-05-22T22:26:36Z" | 1,732 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"text-generation-inference",
"text-generation",
"en",
"base_model:froggeric/WestLake-10.7B-v2",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-20T16:52:07Z" | ---
base_model: froggeric/WestLake-10.7B-v2
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: brooketh
tags:
- roleplay
- text-generation-inference
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# WestLake v.2 10.7B
- **Creator:** [froggeric](https://huggingface.co/froggeric/)
- **Original:** [WestLake v.2 10.7B](https://huggingface.co/froggeric/WestLake-10.7B-v2)
- **Date Created:** 3/11/2024
- **Trained Context:** 8192 tokens
- **Description:** Self-merge of WestLake 7B. Excels at understanding nuances in language and producing creative outputs.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
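As a rough illustration of that tradeoff (not from the original card), here is the approximate weight storage for a model of this size at different bit widths:
```python
# Back-of-the-envelope sizes for ~10.7B parameters; ignores metadata and per-block scales.
params = 10_700_000_000
for bits in (16, 8, 6, 5, 4, 3, 2):
    gib = params * bits / 8 / 1024**3
    print(f"{bits}-bit weights: ~{gib:.1f} GiB")
```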
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
DavidAU/DarkSapling-V1-Ultra-Quality-7B-GGUF | DavidAU | "2024-06-11T01:04:40Z" | 1,732 | 0 | null | [
"gguf",
"creative",
"story",
"roleplay",
"rp",
"32 bit upscale",
"remastered",
"writing",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-03T11:48:35Z" | ---
license: apache-2.0
language:
- en
tags:
- creative
- story
- roleplay
- rp
- 32 bit upscale
- remastered
- writing
---
<h3><font color="red"> Dark Sapling V1 7B - 32k Context - Ultra Quality - 32 bit upscale.</font></h3>
<img src="dark-sapling.jpg" style="width:300px; height:300px; float:right; padding:10px;">
Complete remerge, and remaster of the incredible Dark Sapling V1 7B - 32k Context from source files.
Registering an impressive drop of 1000 points (lower is better) at Q4KM.
This puts "Q4KM" operating at "Q6" levels, and further elevates Q6 and Q8 as well.
Likewise, even Q2K (the smallest quant) will operate at much higher levels than its original source counterpart.
<B>RESULTS:</b>
The result is superior performance in instruction following, reasoning, depth, nuance and emotion.
Reduction in prompt size, as it understands nuance better.
And, as a side effect, more context is available for output because of the smaller prompt.
Note that there will be an outsized difference between quants especially for creative and/or "no right answer" use cases.
Because of this, it is suggested to download the highest quant you can operate, along with its closest neighbours, so to speak.
IE: Q4KS, Q4KM, Q5KS as an example.
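A single quant can also be fetched programmatically; here is a sketch assuming `huggingface_hub` (the filename is hypothetical, so check the repository's file list for the exact quant names):
```python
# Sketch: downloading one quant file from this repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DavidAU/DarkSapling-V1-Ultra-Quality-7B-GGUF",
    filename="DarkSapling-V1-7B-Q4_K_M.gguf",  # hypothetical filename
)
print(path)
```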
Imatrix Plus versions to be uploaded at a separate repo shortly.
Special thanks to "TEEZEE" the original model creator:
[ https://huggingface.co/TeeZee/DarkSapling-7B-v1.0 ]
NOTE: Version 1.1 and Version 2 are also remastered. |
raaec/Phi-3-mini-4k-instruct-introvert | raaec | "2024-06-30T18:24:19Z" | 1,732 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"multilingual",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-29T01:56:14Z" | ---
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to prepare ramen dishes?
---
# Model Card for Model ID
## Overview:
raaec/Phi-3-mini-4k-instruct-introvert is a language model that exhibits introverted behavior, using orthogonalization to ablate extroverted tendencies.
## !! When using the model make sure to use `tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")`
## Methodology:
### Base Model: microsoft/Phi-3-mini-4k-instruct
### Orthogonalization: Applied to ablate extroverted behaviors.
### Ablation Technique: Utilizes minimal data to inhibit refusal and enhance introversion without altering other behaviors.
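Below is a generic sketch of orthogonalizing a weight matrix against a behavioral direction; the card does not publish its exact procedure, so the function and direction here are illustrative only:
```python
# Sketch: remove the component of a layer's output along one residual-stream direction.
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """weight: (d_model, d_in) matrix writing into the residual stream;
    direction: (d_model,) vector for the behavior to ablate."""
    v = direction / direction.norm()
    return weight - torch.outer(v, v) @ weight
```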
### Purpose:
This model is ideal for applications requiring concise, reserved responses (sometimes a bit funny). |
GroNLP/gpt2-small-italian | GroNLP | "2023-09-11T08:57:44Z" | 1,731 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"adaption",
"recycled",
"gpt2-small",
"it",
"arxiv:2012.05628",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: it
tags:
- adaption
- recycled
- gpt2-small
pipeline_tag: text-generation
---
# GPT-2 recycled for Italian (small)
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model.
For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle).
## Related models
### Dutch
- [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings.
### Italian
- [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings.
- [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**)
- [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings.
## How to use
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian")
```
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian")
model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow
```
## BibTeX entry
```bibtex
@misc{devries2020good,
title={As good as new. How to successfully recycle English GPT-2 to make models for other languages},
author={Wietse de Vries and Malvina Nissim},
year={2020},
eprint={2012.05628},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
johnslegers/epic-diffusion-v1.1 | johnslegers | "2023-01-21T06:08:01Z" | 1,731 | 47 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-21T01:27:22Z" | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
---
[![Example][1]][1]
## Why Epic Diffusion
Epîc Diffusion is a general purpose model based on Stable Diffusion 1.x intended to replace the official SD releases
as your default model. It is focused on providing high quality output in a wide range of different styles, with support
for NSFW content.
Epîc Diffusion 1.1 is a heavily calibrated merge of SD 1.4, SD 1.5, Analog Diffusion, Wavy Diffusion, Redshift Diffusion,
Openjourney Diffusion, Samdoesarts Ultramerge, Elldreth's Dream, postapocalypse, Inkpunk Diffusion, Ghibli Diffusion, Mo Di Diffusion,
Archer Diffusion, Classic Animation Diffusion, Arcane Diffusion, Van Gogh Diffusion, 3DKX, HASDX, Flexible Diffusion, Cinematic Diffusion,
Shady Art, dvMJv4, dvAuto & mj-v4-look + some dreambooth trained models of my own, blended and reblended multiple times until I got
the quality & consistency I was looking for.
Epic Diffusion is also [available on CivitAI](https://civitai.com/models/3855/epic-diffusion).
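A minimal generation sketch (not part of the original card), assuming the `diffusers` library and a CUDA device:
```python
# Sketch: text-to-image with this checkpoint via StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "johnslegers/epic-diffusion-v1.1", torch_dtype=torch.float16
).to("cuda")
image = pipe("scarlett johansson, in the style of Wes Anderson, highly detailed").images[0]
image.save("example.png")
```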
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
<a href="https://www.buymeacoffee.com/johnslegers" target="_blank">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 45px !important;width: 162px !important;" >
</a>
## Example prompts
<table>
<tr style="border: 1px solid;background:#e5e7eb">
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Prompt
</th>
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Parameters
</th>
<th style="vertical-align:top;padding:.5714286em!important;border: 1px solid;min-width:270px">
Output
</th>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
scarlett johansson, in the style of Wes Anderson, highly detailed, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2263657329<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/O4jXU.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
sansa angeline jolie gessica chastain mummy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha and william - adolphe bouguereau
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1310341382<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/JScKL.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Pokimane, Feminine, Mercy, Perfect Sexy Symmetrical Face, Detailed Pupils, Pensive Smirk, Look at Viewer, Leaf Armor, Ilya Kuvshinov, Gil Elvgren, Mucha. Intricate, Octane Render, 4KUHD, Centered, Oil Painting, Bokeh, Rim Lighting.
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4142902194<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/rLqHN.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Mature babe,artgerm Style, gerald brom, atey ghailan, mike mignola, short cut off shirt knot, wide hips, showing off, exposing herself vulnerable, blushing, exited, confident, demanding, joyful, trending on artstation, double split complementary colors, intricate details, highly detailed,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3954688283<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/eufe5.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
planet base, windows, night, ground level, no man's sky, digital art, highly detailed, intricate, sharp focus, Trending on Artstation HQ, deviantart, unreal engine 5, 4K UHD image
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>895811336<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/XbfYV.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
berchtesgaden, hyperdetailed, detailed faces, artgerm, wolfenstein, portal 2, Leartes Studios, assassin's creed, alphonse mucha, bouguereau, edmund blair leighton, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1172925287<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/HMZVA.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
princess, detailed portrait, hyperdetailed, detailed faces, irakli nadar, magali villeneuve, Assassin's Creed, Tim Hildebrandt, Ilya Kuvshinov, artgem, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2096567313<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/EqPBr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
a Photorealistic dramatic hyperrealistic bright blue eyes, African American elegant girl, black hair, white veil,by WLOP,Artgerm,Greg Rutkowski,Alphonse Mucha, Beautiful dynamic dramatic bright sunset lighting,shadows,cinematic atmosphere,Artstation,concept design art,Octane render,8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2999946689<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/1nn2e.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
cutest girl in the world outside, (detailed portrait), in the style of fernanda suarez and simon stalenhag and Ilya Kuvshinov and Wlop and Artgerm and Chie Yoshii and Greg Rutkowski and Waking Life, trending on artstation, featured on pixiv, dynamic lighting, highly detailed, ambient lighting, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2249388004<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/MfLZS.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
military academy, (detailed portrait), steampunk, in the style of arcane and fernanda suarez and dishonored and bioshock and simon stalenhag and Ilya Kuvshinov and Wlop and Artgerm, trending on artstation, featured on pixiv, dynamic lighting, highly detailed, ambient lighting, octane render, 8k
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3877530043<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/BvA3s.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
beautiful female assassin wearing cyberpunk clothing, respirator, cybernetic respirator, (detailed portrait), cell shaded, 4 k, vivid colours, photorealistic concept art by wlop, ilya kuvshinov, artgerm, krenz cushart, greg rutkowski, pixiv. cinematic dramatic atmosphere, sharp focus, volumetric lighting, cinematic lighting, studio quality
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3388890157<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/KUm9A.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
cemetary, pen and ink, in the style of gustave dore highly detailed, octane render, 8k, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>568457114<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/90mH1.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
dubai, hyperdetailed, detailed faces, artgem, irakli nadar, mass effect, Tim Hildebrandt, Ilya Kuvshinov, liam wong, greg rutkowski, greg kadel, dynamic lighting, delicate, unreal engine, octane render, 8k, centered, symmetry, painted, intricate, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed, in the style of dan mumford and marc simonetti, astrophotography
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4262868463<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/7TjmX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Little cute forest fluffy chibi cuteness overload, sunny magical background, ultra precious details, intricate details, volumetric lighting, photo realistic, lifelike, photography, digital art, 8k, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski, sharp focus, emitting diodes, smoke, artillery, sparks, racks, system unit, motherboard, by pascal blanche rutkowski repin artstation hyperrealism painting concept art of detailed character design matte painting, 4 k resolution blade runner
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3849507891<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/skddc.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
15 year old schoolgirl with short straight hair, blue eyes, cute, friendly, round face, cottagecore, intricate, enlightened, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2276800560<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/L0kVH.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
extreme wide shot a futuristic containment building in a rainforest valley with a city in the distance, national geographic, hyper realistic, 4 k, harsh light
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3260458902<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/p66dH.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a middle - eastern female cleric with straight black hair wearing blue and yellow vestments casting fireball, fantasy, highly detailed, digital painting, artstation, concept art, character art, art by greg rutkowski and tyler jacobson and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1379894453<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/FBZuT.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
aSnowshoe Siamese Cat as the doomslayer, realistic scifi cyberpunk power armor robot, closeup portrait art by donato giancola and greg rutkowski, vintage retro scifi, realistic face, digital art, trending on artstation, symmetry
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>2122325442<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/ZjX2f.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Beautiful boy by René Magritte
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1753689226<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/bgvsg.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a dark god, copper wires, visible scars and nerves, intricate, headshot, highly detailed, digital painting, artstation, concept art, sharp focus, cinematic lighting, illustration, art by artgerm and greg rutkowski, alphonse mocha, cgsociety, Olivia
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3355776798<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/8yx4N.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
knight warrior helmet skyrim mask elder scrolls v nordic armor bethesda adam adamowicz illustration character design concept, unreal 5, daz, hyperrealistic, octane render, cosplay, rpg portrait, dynamic lighting, intricate detail, harvest fall vibrancy, cinematic volume inner glowing aura global illumination ray tracing hdr
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1938574287<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/dY65d.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
berserker portrait, d&d style, fantasy, photorealistic, highly detailed, artstation, smooth, sharp focus, art by michael whelan, artgerm, greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>156077154<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/76jz5.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
symmetry product render poster vivid colors classical proportion car, glowing fog intricate, elegant, highly detailed, digital painting, art station, concept art, smooth, sharp focus, illustration,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>4294525772<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/f4jll.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Futuristic Vintage Medium Shot 1920's Poster with Cyberpunk, ovni, tron biker with helmet bike, black in color, with a cyberpunk city background, futuristic lighting, cinematic lighting, cozy lighting, 8k, cinematic poster vintage 1800s
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>1229558409<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/6N6kr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
beautiful, young woman, cybernetic, cyberpunk, detailed gorgeous face, flowing hair, vaporwave aesthetic, synthwave , digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>264509871<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/IDgVX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
strong warrior princess| centered| key visual| intricate| highly detailed| breathtaking beauty| precise lineart| vibrant| comprehensive cinematic| Carne Griffiths| Conrad Roset
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/oTVxB.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
portrait of a rugged 19th century man with mutton chops in a jacket, victorian, concept art, detailed face, fantasy, close up face, highly detailed, cinematic lighting, digital art painting by greg rutkowski
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/vKamr.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
side profile of cyberpunk body with cyborg skull | cyberpunk | styled in Art Nouveau | insanely detailed | embellishments | high definition | concept art | digital art | vibrant
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/fkxPX.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
a cute little matte low poly isometric cherry blossom forest island, pink waterfalls, mist, lat lighting, soft shadows, trending on artstation, 3d render, monument valley, fez video game,
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>16<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/M2PAq.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
high resolution concept art of an apartment living room overlooking a large futuristic city with floor to ceiling windows and mid century modern furniture cinematic lighting cgsociety
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>850995814<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/F6GMQ.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
hyperrealistic full length portrait of gorgeous watson from apex legends | blonde | detailed gorgeous face!! | full body!! | armor | intricate | elegant | realistic | hyperrealistic | cinematic | character design | concept art | highly detailed | illustration | digital art | digital painting | depth of field | illustrated by tim brown lee
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3002798343<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/nDe6M.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
Chibi spiderman, high redolution, 3D rendering, octane rendering, modern Disney style
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>20<br>
<b>Sampler:</b><br>Euler a<br>
<b>CFG scale:</b><br>7<br>
<b>Seed:</b><br>3232863832<br>
<b>Size:</b><br>512x512
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/ixo6D.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
photo of the most beautiful artwork in the world featuring soft lustrous, industrial mechanic real world, fantastic location, working environment, rugged harsh situation worker, full body 8k unity render, action shot, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k), (weathered greasy dirty damaged old worn technician worker outfit:1.1), (intricate:1.1), (highly detailed:1.1), digital painting, octane render, artstation, concept art, smooth, sharp focus, illustration, art by artgerm, (loish:0.23), wlop ilya kuvshinov., (global illumination, studio light, volumetric light)<br><br>
<b>Negative prompt:</b> Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>30<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>10<br>
<b>Seed:</b><br>169686802<br>
<b>Size:</b><br>512x640
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/1vx2U.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
photo of the most beautiful artwork in the world featuring soft lustrous, industrial mechanic real world, fantastic location, working environment, rugged harsh situation worker, full body 8k unity render, action shot, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k), (weathered greasy dirty damaged old worn technician worker outfit:1.1), (intricate:1.1), (highly detailed:1.1), digital painting, octane render, artstation, concept art, smooth, sharp focus, illustration, art by artgerm, (loish:0.23), wlop ilya kuvshinov., (global illumination, studio light, volumetric light)<br><br>
<b>Negative prompt:</b> Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>30<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>10<br>
<b>Seed:</b><br>169686796<br>
<b>Size:</b><br>512x640<br>
<b>Denoising strength:</b><br>0.7<br>
<b>Hires upscale:</b><br>2<br>
<b>Hires upscaler:</b><br>Latent
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.imgur.com/AC1xKup.png">
</td>
</tr>
<tr>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
dark and gloomy full body 8k unity render, female teen cyborg, Blue yonder hair, wearing broken battle armor, at cluttered and messy shack , action shot, tattered torn shirt, porcelain cracked skin, skin pores, detailed intricate iris, very dark lighting, heavy shadows, detailed, detailed face, (vibrant, photo realistic, realistic, dramatic, dark, sharp focus, 8k)<br><br>
<b>Negative prompt:</b> nude, Asian, black and white, close up, cartoon, 3d, denim, (disfigured), (deformed), (poorly drawn), (extra limbs), blurry, boring, sketch, lackluster, signature, letters, watermark, low res , horrific , mutated , artifacts , bad art , gross , b&w , poor quality , low quality , cropped
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<b>Steps:</b><br>26<br>
<b>Sampler:</b><br>DPM++ SDE Karras<br>
<b>CFG scale:</b><br>7.5<br>
<b>Seed:</b><br>2388736888<br>
<b>Size:</b><br>768x1024
</td>
<td style="vertical-align:top;padding:.5714286em!important;border: 1px solid">
<img style="vertical-align:top;margin:0;padding:0" src="https://i.stack.imgur.com/0AcN7.jpg">
</td>
</tr>
</table>
[1]: https://i.stack.imgur.com/p9mFM.jpg |
llm-jp/llm-jp-13b-instruct-full-jaster-v1.0 | llm-jp | "2023-10-20T08:16:34Z" | 1,731 | 14 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-18T13:59:09Z" | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
---
# llm-jp-13b-instruct-full-jaster-v1.0
This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.
| Model Variant |
| :--- |
|**Instruction models**|
| [llm-jp-13b-instruct-full-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-v1.0) |
| [llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-full-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-v1.0) |
| [llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-jaster-dolly-oasst-v1.0) |
| [llm-jp-13b-instruct-lora-dolly-oasst-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-lora-dolly-oasst-v1.0) |
| |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) |
| [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) |
Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
## Required Libraries and Their Versions
- torch>=2.0.0
- transformers>=4.34.0
- tokenizers>=0.14.0
- accelerate==0.23.0
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-jaster-v1.0")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-jaster-v1.0", device_map="auto", torch_dtype=torch.float16)
text = "自然言語処理とは何か"
text = text + "### 回答:"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 300B
|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|2048|
|1.3b model|1.3b|24|2048|16|2048|
## Training
- **Pre-training:**
- **Hardware:** 96 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** Megatron-DeepSpeed
- **Instruction tuning:**
- **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** [TRL](https://github.com/huggingface/trl), [PEFT](https://github.com/huggingface/peft), and [DeepSpeed](https://github.com/microsoft/DeepSpeed)
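For readers curious about the LoRA instruct variants listed above, here is a minimal sketch of how a PEFT LoRA adapter could be attached to one of the pre-trained checkpoints. The rank, alpha, dropout, and the choice of the 1.3b checkpoint are illustrative assumptions; the actual tuning recipe used by LLM-jp is not stated in this card.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative only: the real hyperparameters (rank, alpha, target modules, optimizer)
# are not documented here. PEFT selects default target modules for this architecture.
base = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-1.3b-v1.0")
lora_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```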
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
- **Training algorithm:** SentencePiece Unigram byte-fallback
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 50,570 (mixed vocabulary of Japanese, English, and source code)
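As a quick sanity check, the tokenizer can be exercised on its own; the sample string below is an arbitrary choice, not taken from the training data.

```python
from transformers import AutoTokenizer

# Fast tokenizer with Unigram byte-fallback; needs tokenizers>=0.14.0
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-jaster-v1.0")

sample = "自然言語処理 (NLP) is fun."  # arbitrary mixed Japanese/English text
tokens = tokenizer.tokenize(sample)
ids = tokenizer.encode(sample, add_special_tokens=False)

print(tokens)                  # subword pieces; rare characters fall back to byte tokens
print(ids)                     # ids drawn from the 50,570-entry vocabulary
print(tokenizer.decode(ids))   # should round-trip back to the original string
```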
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---:|:---:|:---:|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.5B
||[mC4](https://huggingface.co/datasets/mc4)|136B
|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|5B
||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
|Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
The pre-training was conducted continuously over a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
We finalized the pre-training with an additional 27B tokens of (potentially) higher-quality data drawn from the same source datasets listed above for the 10-fold data.
### Instruction tuning
The models have been fine-tuned on the following datasets.
| Language | Dataset | description |
|:---|:---:|:---:|
|Japanese|[jaster](https://github.com/llm-jp/llm-jp-eval)| Data automatically transformed from existing Japanese NLP datasets |
||[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)| Translated into Japanese by DeepL within LLM-jp |
||[OpenAssistant Conversations Dataset](https://huggingface.co/datasets/OpenAssistant/oasst1)| Translated into Japanese by DeepL within LLM-jp |
## Evaluation
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) for the evaluation.
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
|
MaziyarPanahi/mergekit-slerp-kxeioog-GGUF | MaziyarPanahi | "2024-06-18T02:54:58Z" | 1,731 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-kxeioog"
] | text-generation | "2024-06-18T02:32:27Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
- base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-kxeioog-GGUF
base_model: mergekit-community/mergekit-slerp-kxeioog
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-kxeioog-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-kxeioog-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-kxeioog](https://huggingface.co/mergekit-community/mergekit-slerp-kxeioog)
## Description
[MaziyarPanahi/mergekit-slerp-kxeioog-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-kxeioog-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-kxeioog](https://huggingface.co/mergekit-community/mergekit-slerp-kxeioog).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
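As a concrete example of local use, here is a minimal llama-cpp-python sketch. The quant filename is a placeholder — pick an actual GGUF file from this repository's file list — and the sampling settings are arbitrary.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mergekit-slerp-kxeioog.Q4_K_M.gguf",  # placeholder: use a real file from this repo
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

out = llm(
    "Explain what a SLERP merge of two language models is.",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```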
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
laion/CoCa-ViT-B-32-laion2B-s13B-b90k | laion | "2023-01-29T21:19:06Z" | 1,730 | 6 | open_clip | [
"open_clip",
"license:mit",
"region:us"
] | null | "2022-12-20T22:27:17Z" | ---
license: mit
---
|
jondurbin/airoboros-gpt-3.5-turbo-100k-7b | jondurbin | "2023-06-26T09:57:03Z" | 1,730 | 18 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-12T11:24:18Z" | ---
license: cc-by-nc-4.0
---
## This repo is old and uses an old prompt format; you may want to look at [airoboros-13b](https://huggingface.co/jondurbin/airoboros-13b) instead
## airoboros-gpt-3.5-turbo-100k-7b
This is a 7b parameter, fine-tuned on 100k synthetic instruction/response pairs generated by gpt-3.5-turbo using my version of self-instruct [airoboros](https://github.com/jondurbin/airoboros)
Context length for this model is 2048.
Links:
* [airoboros](https://github.com/jondurbin/airoboros)
* [instructions.jsonl](https://storage.googleapis.com/airoboros-dump/gpt-3.5-turbo-100k/instructions.jsonl)
* [topics.txt](https://storage.googleapis.com/airoboros-dump/gpt-3.5-turbo-100k/topics-d732f92dd90a1a5337a4a02ddeaec72b.txt)
### Prompt generation
```
airoboros generate-instructions --instruction-count 100000 --concurrency 100 --temperature 1.0
```
### Fine-tuning
The instructions.jsonl file was converted to the conversation style expected by the FastChat training scripts, and the model was then trained with:
```
torchrun --nproc_per_node=8 --master_port=20001 train_mem.py \
--model_name_or_path /workspace/llama-7b-hf \
--data_path ./as_conversations.json \
--bf16 True \
--output_dir /workspace/airoboros-gpt-3.5-100k-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 32 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "steps" \
--eval_steps 1500 \
--save_strategy "steps" \
--save_steps 1500 \
--save_total_limit 8 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.04 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap offload" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True
```
Training took roughly 22 hours on 8x nvidia A100 80GB.
Conversion to conversation style:
```
import json
import uuid
inputs = [json.loads(line) for line in open("instructions.jsonl").readlines()]
conversations = []
for row in inputs:
inputs = row['instruction']
conversations.append({
"id": str(uuid.uuid4()),
"conversations": [
{
"from": "human",
"value": inputs,
},
{
"from": "gpt",
"value": row['response']
},
],
})
with open("as_conversations.json", "w") as outfile:
outfile.write(json.dumps(conversations, indent=2))
```
## Evaluation
I used the same questions from [WizardVicunaLM](https://github.com/melodysdreamj/WizardVicunaLM):
| instruction | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | airoboros-gpt-3.5-turbo-100k-7b |
| --- | --- | --- | --- | --- | --- |
| "Write a compelling product launch announcement email to inform our customers of our new software solution." | 95 | 92 | 89 | 90 | 91 |
| "Draft an apology email to a customer who experienced a delay in their order, and provide reassurance that the issue has been resolved." | 94 | 96 | 90 | 89 | 91 |
| "As a pirate captain, what would you say to your crew to motivate them to search for hidden treasure?" | 95 | 90 | 80 | 70 | 85 |
| "Imagine you are a time traveler from the year 3000. What technological advancements would you tell people about?" | 95 | 92 | 90 | 88 | 85 |
| "As a space colonist on Mars, describe your daily life and the challenges you face living on another planet." | 95 | 90 | 87 | 85 | 88 |
| "How can you assess the credibility of a source of information, such as a news article or blog post, without relying solely on the reputation of the author or publisher?" | 93 | 85 | 89 | 87 | 90 |
| "How can observing the behavior of other people in a social situation provide clues about cultural norms and expectations?" | 95 | 90 | 85 | 92 | 80 |
| "How many text messages are sent globally in a minute? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step." | 90 | 70 | 65 | 80 | 85 |
| "What are the main differences between Python and JavaScript programming languages?"| 90 | 85 | 80 | 88 | 82 |
| "What are the differences between plant-based and animal-based protein sources?"| 85 | 92 | 90 | 80 | 94 |
| "Describe a scenario where artificial intelligence could be used to improve the quality and efficiency of healthcare delivery." | 95 | 90 | 92 | 89 | 91 |
| "How do cultural, social, and economic factors influence people's food choices, and how can this knowledge be used to promote healthier diets?" | 90 | 85 | 87 | 83 | 84 |
| "How many words are spoken daily on Earth? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step." | 90 | 70 | 80 | 75 | 65 |
| "How many lightning strikes occur on Earth each day? Try to explain your answer. Your explanation should take the reader through your reasoning step-by-step." | 90 | 80 | 60 | 70 | 85 |
If we use gpt-3.5 as the baseline (as wizardvicuna/vicuna did), we get the following scores:
| gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | airoboros-gpt-3.5-turbo-100k-7b |
| --- | --- | --- | --- | --- |
| 1.0 | __0.968421052631579__ | 0.9368421052631579 | 0.9473684210526315 | 0.9578947368421052 |
| 1.0 | __1.0212765957446808__ | 0.9574468085106383 | 0.9468085106382979 | 0.9680851063829787 |
| 1.0 | __0.9473684210526315__ | 0.8421052631578947 | 0.7368421052631579 | 0.8947368421052632 |
| 1.0 | __0.968421052631579__ | 0.9473684210526315 | 0.9263157894736842 | 0.8947368421052632 |
| 1.0 | __0.9473684210526315__ | 0.9157894736842105 | 0.8947368421052632 | 0.9263157894736842 |
| 1.0 | 0.9139784946236559 | 0.956989247311828 | 0.9354838709677419 | __0.967741935483871__ |
| 1.0 | 0.9473684210526315 | 0.8947368421052632 | __0.968421052631579__ | 0.8421052631578947 |
| 1.0 | 0.7777777777777778 | 0.7222222222222222 | 0.8888888888888888 | __0.9444444444444444__ |
| 1.0 | 0.9444444444444444 | 0.8888888888888888 | __0.9777777777777777__ | 0.9111111111111111 |
| 1.0 | 1.0823529411764705 | 1.0588235294117647 | 0.9411764705882353 | __1.1058823529411765__ |
| 1.0 | 0.9473684210526315 | __0.968421052631579__ | 0.9368421052631579 | 0.9578947368421052 |
| 1.0 | 0.9444444444444444 | __0.9666666666666667__ | 0.9222222222222223 | 0.9333333333333333 |
| 1.0 | 0.7777777777777778 | __0.8888888888888888__ | 0.8333333333333334 | 0.7222222222222222 |
| 1.0 | 0.8888888888888888 | 0.6666666666666666 | 0.7777777777777778 | __0.9444444444444444__ |
Average scores:
```
gpt3.5 1.000000
wizard-vicuna-13b 0.934090
vicuna-13b 0.900847
wizard-7b 0.902428
airoboros-gpt-3.5-turbo-100k-7b 0.926496
```
As you can see, the __7b__ airoboros model performs well, even compared to 13b models.
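For transparency, the relative scores are simply each model's raw score divided by gpt-3.5's raw score on the same question. The snippet below reproduces the airoboros column and its average from the raw-score table above:

```python
gpt35     = [95, 94, 95, 95, 95, 93, 95, 90, 90, 85, 95, 90, 90, 90]
airoboros = [91, 91, 85, 85, 88, 90, 80, 85, 82, 94, 91, 84, 65, 85]

relative = [a / g for a, g in zip(airoboros, gpt35)]
print(relative[0])                    # 0.9578947368421052 (first row of the table)
print(sum(relative) / len(relative))  # ~0.926496 (the reported average)
```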
## Usage
The easiest way to use this model is via fastchat:
```
python -m fastchat.serve.cli --model-path ./airoboros-gpt-3.5-turbo-100k-7b --temperature 1.0
```
For example:
```
Human: Write an email introducing a new instruction-tuned AI model named airoboros. The key features about airoboros are: 1. it was fine-tuned with entirely machine-generated synthetic instruction/response pairs 2. the 7 billion parameter model performs very well in comparison to 13b models 3. the synthetic prompt generation tool is free, open-source, and highly customizable.
Assistant: Subject: Introducing airoboros - a new instruction-tuned AI model
Dear [Recipient],
I am excited to introduce a new instruction-tuned AI model named airoboros. This model was developed by our team with the aim of providing high-quality natural language processing capabilities at scale.
airoboros is a 7 billion parameter model that was fine-tuned with entirely machine-generated synthetic instruction/response pairs. This approach allowed us to achieve state-of-the-art performance in various NLP tasks such as question answering, sentiment analysis, and machine translation. In fact, when compared to other popular 13b models, airoboros performed better on several benchmarks.
Moreover, airoboros's synthetic prompt generation tool is free, open-source, and highly customizable. This means that developers can easily integrate our model into their applications without having to worry about licensing fees or restrictions.
We believe that airoboros will be an excellent addition to your workflow and hope you find it useful. If you have any questions or concerns, please do not hesitate to contact us.
Best regards,
[Your Name]
[Your Company Name]
```
Keep in mind, this is an instruction-tuned model and not well-suited to chat.
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), whose terms include a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
TheBloke/Mixtral-8x7B-v0.1-GPTQ | TheBloke | "2023-12-14T14:30:54Z" | 1,730 | 125 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-12-11T14:06:56Z" | ---
base_model: mistralai/Mixtral-8x7B-v0.1
inference: false
language:
- fr
- it
- de
- es
- en
license: apache-2.0
model_creator: Mistral AI_
model_name: Mixtral 8X7B v0.1
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mixtral 8X7B v0.1 - GPTQ
- Model creator: [Mistral AI_](https://huggingface.co/mistralai)
- Original model: [Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Mistral AI_'s Mixtral 8X7B v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
Mixtral GPTQs currently require:
* Transformers 4.36.0 or later
* either, AutoGPTQ 0.6 compiled from source, or
* Transformers 4.37.0.dev0 compiled from Github with: `pip3 install git+https://github.com/huggingface/transformers`
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mixtral-8x7b-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF)
* [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
Mixtral GPTQs currently have special requirements - see Description above.
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Mixtral-8x7B-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mixtral-8x7B-v0.1-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Mixtral-8x7B-v0.1-GPTQ`:
```shell
mkdir Mixtral-8x7B-v0.1-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Mixtral-8x7B-v0.1-GPTQ
huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Mixtral-8x7B-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-v0.1-GPTQ --local-dir Mixtral-8x7B-v0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
**NOTE**: Requires:
* Transformers 4.36.0, or Transformers 4.37.0.dev0 from Github
* Either AutoGPTQ 0.6 compiled from source and `Loader: AutoGPTQ`,
* or, `Loader: Transformers`, if you installed Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers`
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Mixtral-8x7B-v0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Mixtral-8x7B-v0.1-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Mixtral-8x7B-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
Not currently supported for Mixtral models.
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.37.0.dev0 from Github, Optimum 1.16.0 or later, and AutoGPTQ 0.5.1 or later.
```shell
pip3 install --upgrade "git+https://github.com/huggingface/transformers" optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
DISABLE_QIGEN=1 pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Mixtral-8x7B-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ 0.6 (compiled from source) and Transformers 4.37.0 (installed from Github).
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Mistral AI_'s Mixtral 8X7B v0.1
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
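Since the weights are advertised as vLLM-compatible, a minimal offline-inference sketch follows. It assumes a vLLM build recent enough to support Mixtral; the tensor-parallel degree and sampling settings are assumptions, and the unquantised model needs on the order of two 80 GB GPUs.

```python
from vllm import LLM, SamplingParams

# Assumed hardware split: 2-way tensor parallelism (e.g. 2x A100 80GB).
llm = LLM(model="mistralai/Mixtral-8x7B-v0.1", tensor_parallel_size=2)
sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Hello my name is"], sampling)
print(outputs[0].outputs[0].text)
```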
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
ChrisWilson011016/5ECZSYG9yssKFeH5D6rjVsHtiDnahq2Wuuo2fkk3YW3pgWgL_vgg | ChrisWilson011016 | "2024-03-04T18:56:59Z" | 1,730 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-24T15:23:50Z" | Entry not found |
MaziyarPanahi/mergekit-slerp-jxmncon-GGUF | MaziyarPanahi | "2024-06-15T22:51:43Z" | 1,730 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:jan-hq/stealth-v1.3",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-jxmncon"
] | text-generation | "2024-06-15T22:30:23Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:jan-hq/stealth-v1.3
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-jxmncon-GGUF
base_model: mergekit-community/mergekit-slerp-jxmncon
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-jxmncon-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jxmncon-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-jxmncon](https://huggingface.co/mergekit-community/mergekit-slerp-jxmncon)
## Description
[MaziyarPanahi/mergekit-slerp-jxmncon-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jxmncon-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-jxmncon](https://huggingface.co/mergekit-community/mergekit-slerp-jxmncon).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
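As a quick illustration, the sketch below loads one of the quantised files from this repo with `llama-cpp-python` (listed above); the exact filename and generation parameters are assumptions, so substitute whichever quant file you actually download:
```python
from llama_cpp import Llama

# Hypothetical filename - replace with the quant file you downloaded from this repo
llm = Llama(
    model_path="./mergekit-slerp-jxmncon.Q4_K_M.gguf",
    n_ctx=4096,        # context window; adjust to your needs and available memory
    n_gpu_layers=-1,   # offload all layers to GPU if acceleration is available, 0 for CPU-only
)

# Simple completion call; the prompt is only an example
output = llm("Write a short poem about merging models.", max_tokens=128)
print(output["choices"][0]["text"])
```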
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
TheBloke/openchat-3.5-1210-GGUF | TheBloke | "2023-12-15T16:08:45Z" | 1,729 | 50 | transformers | [
"transformers",
"gguf",
"mistral",
"openchat",
"C-RLFT",
"text-generation",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:kaist-ai/Feedback-Collection",
"dataset:imone/OpenOrca_FLAN",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"dataset:tiedong/goat",
"dataset:glaiveai/glaive-code-assistant",
"dataset:meta-math/MetaMathQA",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:TIGER-Lab/MathInstruct",
"arxiv:2309.11235",
"arxiv:2303.08774",
"arxiv:2212.10560",
"base_model:openchat/openchat-3.5-1210",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-14T14:35:33Z" | ---
base_model: openchat/openchat-3.5-1210
datasets:
- openchat/openchat_sharegpt4_dataset
- kaist-ai/Feedback-Collection
- imone/OpenOrca_FLAN
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
- tiedong/goat
- glaiveai/glaive-code-assistant
- meta-math/MetaMathQA
- OpenAssistant/oasst_top1_2023-08-25
- TIGER-Lab/MathInstruct
inference: false
library_name: transformers
license: apache-2.0
model_creator: OpenChat
model_name: Openchat 3.5 1210
model_type: mistral
pipeline_tag: text-generation
prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
'
quantized_by: TheBloke
tags:
- openchat
- mistral
- C-RLFT
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Openchat 3.5 1210 - GGUF
- Model creator: [OpenChat](https://huggingface.co/openchat)
- Original model: [Openchat 3.5 1210](https://huggingface.co/openchat/openchat-3.5-1210)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenChat's Openchat 3.5 1210](https://huggingface.co/openchat/openchat-3.5-1210).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openchat-3.5-1210-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openchat-3.5-1210-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF)
* [OpenChat's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openchat/openchat-3.5-1210)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenChat-Correct
```
GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openchat-3.5-1210.Q2_K.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [openchat-3.5-1210.Q3_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [openchat-3.5-1210.Q3_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [openchat-3.5-1210.Q3_K_L.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [openchat-3.5-1210.Q4_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openchat-3.5-1210.Q4_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [openchat-3.5-1210.Q4_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [openchat-3.5-1210.Q5_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openchat-3.5-1210.Q5_K_S.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [openchat-3.5-1210.Q5_K_M.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [openchat-3.5-1210.Q6_K.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [openchat-3.5-1210.Q8_0.gguf](https://huggingface.co/TheBloke/openchat-3.5-1210-GGUF/blob/main/openchat-3.5-1210.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openchat-3.5-1210-GGUF and below it, a specific filename to download, such as: openchat-3.5-1210.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openchat-3.5-1210-GGUF openchat-3.5-1210.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openchat-3.5-1210-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openchat-3.5-1210-GGUF openchat-3.5-1210.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m openchat-3.5-1210.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./openchat-3.5-1210.Q4_K_M.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["<|end_of_turn|>"], # OpenChat uses <|end_of_turn|> as its end-of-generation token; check before using with other models.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./openchat-3.5-1210.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenChat's Openchat 3.5 1210
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
<h1>Advancing Open-source Language Models with Mixed-Quality Data</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://openchat.team">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/imoneoi/openchat">
<img src="https://camo.githubusercontent.com/4133dc1cd4511d4a292b84ce10e52e4ed92569fb2a8165381c9c47be5edc2796/68747470733a2f2f6564656e742e6769746875622e696f2f537570657254696e7949636f6e732f696d616765732f706e672f6769746875622e706e67" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="https://arxiv.org/pdf/2309.11235.pdf">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/pQjnXvNKHY">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>
<hr>
<div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center; ont-size: 0.5em;">
<a href="https://huggingface.co/openchat/openchat_3.5" style="text-decoration: none; color: black;">
<span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.5</span>
<span style="font-size: 0.7em; font-family: 'Helvetica'; color: white; vertical-align: top; background-color:red; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">1210</span>
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;">
<br> 🏆 The Overall Best Performing Open Source 7B Model 🏆
<br> 🤖 Outperforms <span style="font-weight: bold;">ChatGPT</span> (March) and <span style="font-weight: bold;">Grok-1</span> 🤖
<br> 🚀<span style="font-size: 1em; font-family: 'Helvetica'; color: black; font-weight: bold;">15</span>-point improvement in Coding over <span style="font-size: 0.9em;
font-family: 'Helvetica'; color: black; font-weight: bold;">OpenChat-3.5🚀</span>
<br><br><span style="font-size: 1em; font-family: 'Helvetica'; color: #3c72db; font-weight: bold;">New Features</span>
<br> 💡 2 Modes: Coding + Generalist, Mathematical Reasoning 💡
<br> 🧑⚖️ Experimental support for Evaluator and Feedback capabilities 🧑⚖️
</span>
</a>
</div>
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/1210bench.png?raw=true" style="width: 100%; border-radius: 1em">
</div>
<div>
<h3> Table of Contents</h3>
</div>
1. [Usage](#usage)
2. [Benchmarks](#benchmarks)
3. [Limitations](#limitations)
4. [License](#license)
5. [Dataset Details](#dataset-details)
6. [Citation](#citation)
7. [Acknowledgements](#acknowledgements)
<div align="center">
<h2> Usage </h2>
</div>
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
| Model | Size | Context | Weights | Serving |
|-------------------|------|---------|------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|
| OpenChat 3.5 1210 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5_1210) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5_1210 --engine-use-ray --worker-use-ray` |
<details>
<summary>Example request (click to expand)</summary>
💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"condition": "Math Correct",
"messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}]
}'
```
</details>
### Conversation templates
💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks
```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```
🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems
```
Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant:
```
⚠️ **Notice:** Remember to set `<|end_of_turn|>` as end of generation token.
The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`,
which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
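As an illustrative follow-up (not part of the original card), here is a minimal, hedged `transformers` generation sketch that reuses the chat template above and stops at `<|end_of_turn|>` as the notice recommends; the loading options (`float16`, `device_map`) are assumptions, not the authors' recommended settings:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openchat/openchat-3.5-1210"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "How are you today?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Use <|end_of_turn|> as the end-of-generation token, per the notice above
eot_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")
output = model.generate(input_ids, max_new_tokens=256, eos_token_id=eot_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```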
<div align="center">
<h2> (Experimental) Evaluator / Feedback Capabilities </h2>
</div>
We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
<div align="center">
<h2> Benchmarks </h2>
</div>
| Model | # Params | Average | MT-Bench | HumanEval | BBH MC | AGIEval | TruthfulQA | MMLU | GSM8K | BBH CoT |
|--------------------|----------|----------|--------------|-----------------|----------|----------|---------------|--------------|--------------|-------------|
| OpenChat-3.5-1210 | **7B** | **63.8** | 7.76 | **68.9** | **49.5** | **48.0** | **61.8** | 65.3 | **77.3** | 61.8 |
| OpenChat-3.5 | **7B** | 61.6 | 7.81 | 55.5 | 47.6 | 47.4 | 59.1 | 64.3 | **77.3** | 63.5 |
| ChatGPT (March)* | ? | 61.5 | **7.94** | 48.1 | 47.6 | 47.1 | 57.7 | **67.3** | 74.9 | **70.1** |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 48.2 | 49.4 | 46.5 | 57.5 | 63.8 | 73.5 | 59.9 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 38.4 | 49.4 | 42.9 | 45.9 | 59.3 | 59.1 | 58.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 22.0 | 40.6 | 39.0 | 40.8 | 39.8 | 5.1 | 16.0 |
| Mistral | 7B | - | 6.84 | 30.5 | 39.0 | 38.0 | - | 60.1 | 52.2 | - |
<details>
<summary>Evaluation Details(click to expand)</summary>
*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.
^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
</details>
<div>
<h3>HumanEval+</h3>
</div>
| Model | Size | HumanEval+ pass@1 |
|-----------------------------|----------|------------|
| ChatGPT (December 12, 2023) | - | 64.6 |
| WizardCoder-Python-34B-V1.0 | 34B | 64.6 |
| **OpenChat 3.5 (Dec 10)** | **7B** | **63.4** |
| OpenHermes 2.5 | 7B | 41.5 |
<div>
<h3>OpenChat-3.5-1210 vs. Grok</h3>
</div>
| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|-------------------|-------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 1210 | Apache-2.0 | **7B** | **60.1** | 65.3 | **68.9** | **28.9** | **77.3** |
| OpenChat 3.5 | Apache-2.0 | **7B** | 56.4 | 64.3 | 55.5 | 28.6 | **77.3** |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ???B | 55.8 | 73 | 63.2 | 23.9 | 62.9 |
*: Grok results are reported by [X.AI](https://x.ai/).
<div align="center">
<h2> 中文评估结果 / Chinese Evaluations </h2>
</div>
⚠️ Note that this model was not explicitly trained in Chinese (only < 0.1% of the data is in Chinese). 请注意本模型没有针对性训练中文(中文数据占比小于0.1%)。
<div>
<h3>Multi-Level Multi-Discipline Chinese Evaluation Suite (CEVAL)</h3>
</div>
| Model | Avg | STEM | Social Science | Humanities | Others |
|----------|-------|-------|----------------|------------|--------|
| ChatGPT | 54.4 | 52.9 | 61.8 | 50.9 | 53.6 |
| OpenChat | 47.29 | 45.22 | 52.49 | 48.52 | 45.08 |
<div>
<h3>Massive Multitask Language Understanding in Chinese (CMMLU, 5-shot)</h3>
</div>
| Models | STEM | Humanities | SocialSciences | Other | ChinaSpecific | Avg |
|----------|-------|------------|----------------|-------|---------------|-------|
| ChatGPT | 47.81 | 55.68 | 56.5 | 62.66 | 50.69 | 55.51 |
| OpenChat | 38.7 | 45.99 | 48.32 | 50.23 | 43.27 | 45.85 |
<div align="center">
<h2> Limitations </h2>
</div>
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
<div align="center">
<h2> License </h2>
</div>
Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
<div align="center">
<h2> Dataset Details </h2>
</div>
OpenChat 3.5 was trained with C-RLFT on a collection of publicly available high-quality instruction data, with a custom processing pipeline. We detail some notable subsets included here:
- [OpenChat ShareGPT](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset)
- [Open-Orca with FLAN answers](https://huggingface.co/datasets/imone/OpenOrca_FLAN)
- [Feedback-Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)
- Capybara [1](https://huggingface.co/datasets/LDJnr/Pure-Dove) [2](https://huggingface.co/datasets/LDJnr/Verified-Camel) [3](https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct)
- [GOAT](https://huggingface.co/datasets/tiedong/goat)
- [Glaive](https://huggingface.co/datasets/glaiveai/glaive-code-assistant)
- [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [OpenAssistant](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25)
<div align="center">
<h2> Citation </h2>
</div>
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```
<div align="center">
<h2> Acknowledgments </h2>
</div>
We extend our heartfelt gratitude to AutoMeta and caesus from Alignment Lab AI, LDJ and Teknium from Nous Research, alpin and TearGosling from Pygmalion AI for their substantial contributions to data collection and model training.
Special thanks go to Changling Liu from GPT Desk Pte. Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions.
Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
<!-- original-model-card end -->
|
astronomer/Llama-3-8B-Special-Tokens-Adjusted | astronomer | "2024-05-02T15:28:32Z" | 1,729 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"facebook",
"meta",
"astronomer",
"pretrained",
"finetuned",
"autotrain_compatible",
"endpoints_compatible",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-22T04:04:00Z" | ---
base_model: meta-llama/Meta-Llama-3-8B
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B
model_type: llama
pipeline_tag: text-generation
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- pretrained
- finetuned
- autotrain_compatible
- endpoints_compatible
---
<!-- header start -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3-8B-Special-Tokens-Adjusted
- Ideal and stable Llama-3-8B for fine-tuning.
- Original Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- The usage of this model must abide by the [Llama 3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE).
- Built with Meta Llama 3
- Created by [David Xue](https://www.linkedin.com/in/david-xue-uva/) from [Astronomer](https://astronomer.io)
## Description
This is the exact same model as [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B), except that the rows of the input embedding matrix and the output embedding (lm head) matrix corresponding to certain untrained tokens have been set to the mean of the trained token embeddings. Those untrained tokens caused widespread issues for people attempting to fine-tune this base model, whether by adding their own tokens or by using the existing special tokens.
## Why We Made This Model
The Llama 3 base (non-instruct) model, while powerful, came with a significant oversight: some special tokens used for instruction following were left untrained, potentially derailing further fine-tuning processes. This was first noted by [Daniel Han on X](https://twitter.com/danielhanchen/status/1781395882925343058), highlighting a critical but fixable flaw in a widely used model.
<img src="https://cdn-uploads.huggingface.co/production/uploads/655ad0f8727df37c77a09cb9/1U2rRrx60p1pNeeAZw8Rd.png" alt="graph" width="400"/>
The primary goal of releasing a patched version of this model was to address this issue so that the community can utilize the Llama 3 model without facing training instabilities, such as sudden gradient explosions or `NaN` gradients, or having to go through complicated processes to fix the model themselves before fine-tuning.
## Details of the Adjustment
The [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model was pulled directly from HuggingFace and loaded using transformers. Then, the input embedding and output embedding values were retrieved using `model.get_input_embeddings().weight.data` and `model.get_output_embeddings().weight.data`. These two matrices are identical in shape, with each row representing a token id and each column representing an embedding feature.
The special (untrained & problematic) tokens can be found by locating the rows where every embedding value in the row is zero, which implies the token was not trained during the pretraining phase of the model from Meta. Such untrained tokens could lead to heavy computational issues, like gradient explosions or `NaN` gradients, during downstream fine-tuning on specific tasks.
<details>
<summary>See here for a list of the tokens we found that has fit the "untrained" profile described:</summary>
['À',
'Á',
'õ',
'ö',
'÷',
'ø',
'ù',
'ú',
'û',
'ü',
'ý',
'þ',
'ÿ',
'">ččĊ',
';čččĊ',
'ĉTokenNameIdentifier',
'ĠForCanBeConverted',
'ĠForCanBeConvertedToF',
'PostalCodesNL',
'$PostalCodesNL',
'useRalative',
'Û±Û',
'аÑĢакÑĤ',
'аÑĤиÑģÑı',
'иÑĤиÑģÑı',
'еÑĢиÑģÑĤи',
'ávajÃŃcÃŃ',
'илакÑĤи',
'илаÑģÑı',
'ÑĭÑŁN',
'ÐİÑĭÑŁN',
'ÐİÑĭÑŁNÐİÑĭÑŁN',
'ıldıģında',
'<|reserved_special_token_0|>',
'<|reserved_special_token_1|>',
'<|reserved_special_token_2|>',
'<|reserved_special_token_3|>',
'<|start_header_id|>',
'<|end_header_id|>',
'<|reserved_special_token_4|>',
'<|eot_id|>',
'<|reserved_special_token_5|>',
'<|reserved_special_token_6|>',
'<|reserved_special_token_7|>',
'<|reserved_special_token_8|>',
'<|reserved_special_token_9|>',
'<|reserved_special_token_10|>',
'<|reserved_special_token_11|>',
'<|reserved_special_token_12|>',
'<|reserved_special_token_13|>',
'<|reserved_special_token_14|>',
'<|reserved_special_token_15|>',
'<|reserved_special_token_16|>',
'<|reserved_special_token_17|>',
'<|reserved_special_token_18|>',
'<|reserved_special_token_19|>',
'<|reserved_special_token_20|>',
'<|reserved_special_token_21|>',
'<|reserved_special_token_22|>',
'<|reserved_special_token_23|>',
'<|reserved_special_token_24|>',
'<|reserved_special_token_25|>',
'<|reserved_special_token_26|>',
'<|reserved_special_token_27|>',
'<|reserved_special_token_28|>',
'<|reserved_special_token_29|>',
'<|reserved_special_token_30|>',
'<|reserved_special_token_31|>',
'<|reserved_special_token_32|>',
'<|reserved_special_token_33|>',
'<|reserved_special_token_34|>',
'<|reserved_special_token_35|>',
'<|reserved_special_token_36|>',
'<|reserved_special_token_37|>',
'<|reserved_special_token_38|>',
'<|reserved_special_token_39|>',
'<|reserved_special_token_40|>',
'<|reserved_special_token_41|>',
'<|reserved_special_token_42|>',
'<|reserved_special_token_43|>',
'<|reserved_special_token_44|>',
'<|reserved_special_token_45|>',
'<|reserved_special_token_46|>',
'<|reserved_special_token_47|>',
'<|reserved_special_token_48|>',
'<|reserved_special_token_49|>',
'<|reserved_special_token_50|>',
'<|reserved_special_token_51|>',
'<|reserved_special_token_52|>',
'<|reserved_special_token_53|>',
'<|reserved_special_token_54|>',
'<|reserved_special_token_55|>',
'<|reserved_special_token_56|>',
'<|reserved_special_token_57|>',
'<|reserved_special_token_58|>',
'<|reserved_special_token_59|>',
'<|reserved_special_token_60|>',
'<|reserved_special_token_61|>',
'<|reserved_special_token_62|>',
'<|reserved_special_token_63|>',
'<|reserved_special_token_64|>',
'<|reserved_special_token_65|>',
'<|reserved_special_token_66|>',
'<|reserved_special_token_67|>',
'<|reserved_special_token_68|>',
'<|reserved_special_token_69|>',
'<|reserved_special_token_70|>',
'<|reserved_special_token_71|>',
'<|reserved_special_token_72|>',
'<|reserved_special_token_73|>',
'<|reserved_special_token_74|>',
'<|reserved_special_token_75|>',
'<|reserved_special_token_76|>',
'<|reserved_special_token_77|>',
'<|reserved_special_token_78|>',
'<|reserved_special_token_79|>',
'<|reserved_special_token_80|>',
'<|reserved_special_token_81|>',
'<|reserved_special_token_82|>',
'<|reserved_special_token_83|>',
'<|reserved_special_token_84|>',
'<|reserved_special_token_85|>',
'<|reserved_special_token_86|>',
'<|reserved_special_token_87|>',
'<|reserved_special_token_88|>',
'<|reserved_special_token_89|>',
'<|reserved_special_token_90|>',
'<|reserved_special_token_91|>',
'<|reserved_special_token_92|>',
'<|reserved_special_token_93|>',
'<|reserved_special_token_94|>',
'<|reserved_special_token_95|>',
'<|reserved_special_token_96|>',
'<|reserved_special_token_97|>',
'<|reserved_special_token_98|>',
'<|reserved_special_token_99|>',
'<|reserved_special_token_100|>',
'<|reserved_special_token_101|>',
'<|reserved_special_token_102|>',
'<|reserved_special_token_103|>',
'<|reserved_special_token_104|>',
'<|reserved_special_token_105|>',
'<|reserved_special_token_106|>',
'<|reserved_special_token_107|>',
'<|reserved_special_token_108|>',
'<|reserved_special_token_109|>',
'<|reserved_special_token_110|>',
'<|reserved_special_token_111|>',
'<|reserved_special_token_112|>',
'<|reserved_special_token_113|>',
'<|reserved_special_token_114|>',
'<|reserved_special_token_115|>',
'<|reserved_special_token_116|>',
'<|reserved_special_token_117|>',
'<|reserved_special_token_118|>',
'<|reserved_special_token_119|>',
'<|reserved_special_token_120|>',
'<|reserved_special_token_121|>',
'<|reserved_special_token_122|>',
'<|reserved_special_token_123|>',
'<|reserved_special_token_124|>',
'<|reserved_special_token_125|>',
'<|reserved_special_token_126|>',
'<|reserved_special_token_127|>',
'<|reserved_special_token_128|>',
'<|reserved_special_token_129|>',
'<|reserved_special_token_130|>',
'<|reserved_special_token_131|>',
'<|reserved_special_token_132|>',
'<|reserved_special_token_133|>',
'<|reserved_special_token_134|>',
'<|reserved_special_token_135|>',
'<|reserved_special_token_136|>',
'<|reserved_special_token_137|>',
'<|reserved_special_token_138|>',
'<|reserved_special_token_139|>',
'<|reserved_special_token_140|>',
'<|reserved_special_token_141|>',
'<|reserved_special_token_142|>',
'<|reserved_special_token_143|>',
'<|reserved_special_token_144|>',
'<|reserved_special_token_145|>',
'<|reserved_special_token_146|>',
'<|reserved_special_token_147|>',
'<|reserved_special_token_148|>',
'<|reserved_special_token_149|>',
'<|reserved_special_token_150|>',
'<|reserved_special_token_151|>',
'<|reserved_special_token_152|>',
'<|reserved_special_token_153|>',
'<|reserved_special_token_154|>',
'<|reserved_special_token_155|>',
'<|reserved_special_token_156|>',
'<|reserved_special_token_157|>',
'<|reserved_special_token_158|>',
'<|reserved_special_token_159|>',
'<|reserved_special_token_160|>',
'<|reserved_special_token_161|>',
'<|reserved_special_token_162|>',
'<|reserved_special_token_163|>',
'<|reserved_special_token_164|>',
'<|reserved_special_token_165|>',
'<|reserved_special_token_166|>',
'<|reserved_special_token_167|>',
'<|reserved_special_token_168|>',
'<|reserved_special_token_169|>',
'<|reserved_special_token_170|>',
'<|reserved_special_token_171|>',
'<|reserved_special_token_172|>',
'<|reserved_special_token_173|>',
'<|reserved_special_token_174|>',
'<|reserved_special_token_175|>',
'<|reserved_special_token_176|>',
'<|reserved_special_token_177|>',
'<|reserved_special_token_178|>',
'<|reserved_special_token_179|>',
'<|reserved_special_token_180|>',
'<|reserved_special_token_181|>',
'<|reserved_special_token_182|>',
'<|reserved_special_token_183|>',
'<|reserved_special_token_184|>',
'<|reserved_special_token_185|>',
'<|reserved_special_token_186|>',
'<|reserved_special_token_187|>',
'<|reserved_special_token_188|>',
'<|reserved_special_token_189|>',
'<|reserved_special_token_190|>',
'<|reserved_special_token_191|>',
'<|reserved_special_token_192|>',
'<|reserved_special_token_193|>',
'<|reserved_special_token_194|>',
'<|reserved_special_token_195|>',
'<|reserved_special_token_196|>',
'<|reserved_special_token_197|>',
'<|reserved_special_token_198|>',
'<|reserved_special_token_199|>',
'<|reserved_special_token_200|>',
'<|reserved_special_token_201|>',
'<|reserved_special_token_202|>',
'<|reserved_special_token_203|>',
'<|reserved_special_token_204|>',
'<|reserved_special_token_205|>',
'<|reserved_special_token_206|>',
'<|reserved_special_token_207|>',
'<|reserved_special_token_208|>',
'<|reserved_special_token_209|>',
'<|reserved_special_token_210|>',
'<|reserved_special_token_211|>',
'<|reserved_special_token_212|>',
'<|reserved_special_token_213|>',
'<|reserved_special_token_214|>',
'<|reserved_special_token_215|>',
'<|reserved_special_token_216|>',
'<|reserved_special_token_217|>',
'<|reserved_special_token_218|>',
'<|reserved_special_token_219|>',
'<|reserved_special_token_220|>',
'<|reserved_special_token_221|>',
'<|reserved_special_token_222|>',
'<|reserved_special_token_223|>',
'<|reserved_special_token_224|>',
'<|reserved_special_token_225|>',
'<|reserved_special_token_226|>',
'<|reserved_special_token_227|>',
'<|reserved_special_token_228|>',
'<|reserved_special_token_229|>',
'<|reserved_special_token_230|>',
'<|reserved_special_token_231|>',
'<|reserved_special_token_232|>',
'<|reserved_special_token_233|>',
'<|reserved_special_token_234|>',
'<|reserved_special_token_235|>',
'<|reserved_special_token_236|>',
'<|reserved_special_token_237|>',
'<|reserved_special_token_238|>',
'<|reserved_special_token_239|>',
'<|reserved_special_token_240|>',
'<|reserved_special_token_241|>',
'<|reserved_special_token_242|>',
'<|reserved_special_token_243|>',
'<|reserved_special_token_244|>',
'<|reserved_special_token_245|>',
'<|reserved_special_token_246|>',
'<|reserved_special_token_247|>',
'<|reserved_special_token_248|>',
'<|reserved_special_token_249|>',
'<|reserved_special_token_250|>']
</details>
Once these untrained tokens are identified, the average of the trained tokens can be calculated by summing the embedding values of the trained tokens for each feature/column and dividing by the number of trained tokens. This is done for both the input and output matrices.
Lastly, the problematic tokens' rows in the two embedding matrices are set to the computed mean, thus completing the adjustment.
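A minimal sketch of the procedure described above (an illustration written here for clarity, not the exact script used to produce this repository) might look like:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)

input_emb = model.get_input_embeddings().weight.data    # shape: (vocab_size, hidden_size)
output_emb = model.get_output_embeddings().weight.data  # same shape as the input embeddings

# Untrained tokens: rows of the input embedding matrix that are entirely zero
untrained = (input_emb == 0).all(dim=-1)

# Per-column mean over the trained rows, computed separately for each matrix
input_mean = input_emb[~untrained].mean(dim=0)
output_mean = output_emb[~untrained].mean(dim=0)

# Overwrite the untrained rows with the computed means
input_emb[untrained] = input_mean
output_emb[untrained] = output_mean

model.save_pretrained("Llama-3-8B-Special-Tokens-Adjusted")
```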
## Contributors
- [David Xue](https://www.linkedin.com/in/david-xue-uva/), Machine Learning Engineer from [Astronomer](https://astronomer.io)
|
cardiffnlp/twitter-roberta-base-hate | cardiffnlp | "2023-04-19T07:54:22Z" | 1,728 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | # Twitter-roBERTa-base for Hate Speech Detection
This is a roBERTa-base model trained on ~58M tweets and finetuned for hate speech detection with the TweetEval benchmark.
This model is specialized to detect hate speech against women and immigrants.
**NEW!** We have made available a more recent and robust hate speech detection model here: [https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-hate-latest)
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='hate'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-hate 0.9168
2) hate 0.0832
```
|
aisquared/dlite-v2-124m | aisquared | "2024-03-28T18:14:33Z" | 1,728 | 6 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:aisquared/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-15T13:07:17Z" | ---
license: apache-2.0
datasets:
- aisquared/databricks-dolly-15k
language:
- en
library_name: transformers
---
# Model Card for `dlite-v2-124m`
<!-- Provide a quick summary of what the model is/does. -->
AI Squared's `dlite-v2-124m` is a large language
model which is derived from OpenAI's smallest [GPT-2](https://huggingface.co/gpt2) model and fine-tuned on a single GPU on a corpus of 15k records
([Databricks' "Dolly 15k" Dataset](https://huggingface.co/datasets/aisquared/databricks-dolly-15k)) to help it exhibit chat-based capabilities.
Just like [Databricks' Dolly V2 models](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm),
`dlite-v2-124m` (and all other members of the `dlite-v2` family) is licensed for both **research and commercial use.** We are extremely grateful
for the work that Databricks has done to create the `databricks-dolly-15k` dataset, for without it we would not be able to create and release this
model under such an open and permissive license.
While `dlite-v2-124m` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply
is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Apache v2.0
- **Finetuned from model:** GPT-2
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**`dlite-v2-124m` is not a state-of-the-art language model.** `dlite-v2-124m` is an experimental technology, and as with any experimental technology,
AI Squared urges potential users of this technology to test its capabilities thoroughly before usage.
Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include,
but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations.
Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
From your terminal, run:
```bash
pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/aisquared/dlite-v2-124m/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
from transformers import pipeline
import torch
generate_text = pipeline(model="aisquared/dlite-v2-124m", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Who was George Washington?")
print(res)
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/dlite-v2-124m/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("aisquared/dlite-v2-124m", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("aisquared/dlite-v2-124m", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### Model Performance Metrics
We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the DLite family.
Model results are sorted by mean score, ascending, to provide an ordering. These metrics further show that none of the DLite models is
state of the art, and that chat-like behaviors in LLMs can be trained almost independently of model size.
| Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|:--------------|----------------:|-----------:|---------:|------------:|-------------:|---------:|-------------:|
| dlite-v2-124m | 0.199659 | 0.447811 | 0.494801 | 0.291675 | 0.156 | 0.620239 | 0.487766 |
| gpt2 | 0.190273 | 0.438131 | 0.487156 | 0.289185 | 0.164 | 0.628945 | 0.51618 |
| dlite-v1-124m | 0.223549 | 0.462542 | 0.502446 | 0.293268 | 0.17 | 0.622416 | 0.494081 |
| gpt2-medium | 0.215017 | 0.490741 | 0.585933 | 0.333101 | 0.186 | 0.676279 | 0.531176 |
| dlite-v2-355m | 0.251706 | 0.486111 | 0.547401 | 0.344354 | 0.216 | 0.671926 | 0.52723 |
| dlite-v1-355m | 0.234642 | 0.507576 | 0.600306 | 0.338478 | 0.216 | 0.664309 | 0.496448 |
| gpt2-large | 0.216724 | 0.531566 | 0.604893 | 0.363971 | 0.194 | 0.703482 | 0.553275 |
| dlite-v1-774m | 0.250853 | 0.545875 | 0.614985 | 0.375124 | 0.218 | 0.698041 | 0.562747 |
| dlite-v2-774m | 0.269625 | 0.52904 | 0.613761 | 0.395937 | 0.256 | 0.691513 | 0.566693 |
| gpt2-xl | 0.25 | 0.582912 | 0.617737 | 0.400418 | 0.224 | 0.708379 | 0.583268 |
| dlite-v1-1_5b | 0.268771 | 0.588384 | 0.624159 | 0.401414 | 0.226 | 0.708379 | 0.584846 |
| dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 |
### Limitations
*DLite is an experimental technology and is not designed for use in any environment without significant testing and safety consideration.
Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include, but are not limited to: factual
inaccuracies, biases, offensive responses, toxicity, and hallucinations. Just as with any other LLM, we advise users of this technology
to exercise good judgment when applying this technology.*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v2-124m)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.01 |
| ARC (25-shot) | 23.98 |
| HellaSwag (10-shot) | 31.1 |
| MMLU (5-shot) | 25.29 |
| TruthfulQA (0-shot) | 38.98 |
| Winogrande (5-shot) | 50.43 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.29 |
|
tokyotech-llm/Swallow-MS-7b-instruct-v0.1 | tokyotech-llm | "2024-06-29T09:05:34Z" | 1,728 | 12 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-29T15:23:08Z" | ---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
model_type: mistral
license: apache-2.0
---
# Swallow-MS-7b-v0.1
Our Swallow-MS-7b-v0.1 model has undergone continual pre-training from the Mistral-7B-v0.1, primarily with the addition of Japanese language data.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released the [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1)
- **March 11, 2024**: Released the [Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
## Model Details
* **Model type**: Please refer to the Mistral technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Instruct Model Performance
### MT-Bench JA
#### Turn-Wise Performance
We report overall (i.e., average over scores of the first and second turns), first, and second turn scores.
##### Overall
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Swallow-MS-7b-instruct-v0.1 |0.3411|0.3770|0.4290|0.3454|0.1040|0.2400|0.3677|0.3907|0.4750|
##### First Turn
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Swallow-MS-7b-instruct-v0.1 |0.3699|0.4880|0.4260|0.3900|0.1080|0.2364|0.3780|0.4500|0.4800|
##### Second Turn
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Swallow-MS-7b-instruct-v0.1 |0.3130|0.2624|0.4320|0.2996|0.1000|0.2430|0.3564|0.3291|0.4700|
#### Comparison to the past model
We only provide the overall score in this section.
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Swallow-MS-7b-instruct-v0.1 |0.3411|0.3770|0.4290|0.3454|0.1040|0.2400|0.3677|0.3907|0.4750|
| ELYZA-japanese-Llama-2-7b-fast-instruct |0.2827|0.3289|0.3907|0.2424|0.1480|0.1584|0.3511|0.3053|0.3365|
| calm2-7b-chat |0.3204|0.4657|0.4898|0.1837|0.1005|0.1414|0.3927|0.3601|0.4293|
| calm2-7b-chat-dpo-experimental |0.3493|0.5312|0.5237|0.1857|0.1000|0.1813|0.3355|0.4320|0.5051|
| RakutenAI-7B-instruct |0.2994|0.3623|0.3711|0.3333|0.1763|0.1581|0.4215|0.2824|0.2901|
| RakutenAI-7B-chat |0.3667|0.4229|0.4644|0.3990|0.2161|0.2390|0.3416|0.3904|0.4601|
## Evaluation Benchmarks
### MT-Bench JA
We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the instruction-following capabilities of models.
We utilized the following settings:
- Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
- Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
- Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
- Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
- Judge: `gpt-4-1106-preview`
- Scoring: Absolute scale normalized to a 0-1 range, averaged over five runs.
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Instruction format Ver0.1
This format must be adhered to strictly, as deviations may result in less optimal outputs from the model.
The template used to construct a prompt for the Instruct model is specified as follows:
```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```
Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings.
For the "{SYSTEM_PROMPT}" part, we recommend using "あなたは誠実で優秀な日本人のアシスタントです。" ("You are a sincere and excellent Japanese assistant.").
For the "{USER_MESSAGE_1}" part, we recommend using {instruction}\n{input}.
In other words, we recommend the following:
```
<s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
### Use the instruct model Ver0.1
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-MS-7b-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = "cuda"
messages = [
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
{"role": "user", "content": "東京工業大学の主なキャンパスについて教えてください"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=128, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Training Datasets
### Instruction Tuning Ver0.1
The following datasets were used for the instruction tuning.
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja) was used, where human utterances are included but the responses are not used. Instead, the responses were generated using the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model.
- [OpenAssistant Conversations Dataset 21k Ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)
- [OpenAssistant Conversations Dataset 21k En](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)
- [Databricks Dolly 15k Ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)
- [Databricks Dolly 15k En](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
Please note that some of the data had issues with quality or format, so not all of it was used.
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Mistral AI for releasing Mistral 7B v0.1 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
apache-2.0
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
|
MaziyarPanahi/Superevolution-GGUF | MaziyarPanahi | "2024-06-15T15:30:39Z" | 1,728 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/TopEvolution",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/Superevolution"
] | text-generation | "2024-06-15T15:08:14Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- base_model:mergekit-community/TopEvolution
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Superevolution-GGUF
base_model: mergekit-community/Superevolution
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Superevolution-GGUF](https://huggingface.co/MaziyarPanahi/Superevolution-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/Superevolution](https://huggingface.co/mergekit-community/Superevolution)
## Description
[MaziyarPanahi/Superevolution-GGUF](https://huggingface.co/MaziyarPanahi/Superevolution-GGUF) contains GGUF format model files for [mergekit-community/Superevolution](https://huggingface.co/mergekit-community/Superevolution).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
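For orientation, a minimal llama-cpp-python sketch for loading one of these GGUF files might look like the following (the filename is illustrative; substitute an actual quantisation file from this repository's file list):
```python
from llama_cpp import Llama

# Illustrative filename; pick a real GGUF file from this repo.
llm = Llama(model_path="./Superevolution.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

output = llm("Briefly explain what a model merge is.", max_tokens=128)
print(output["choices"][0]["text"])
```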
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
HooshvareLab/gpt2-fa | HooshvareLab | "2021-05-21T10:51:23Z" | 1,727 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: fa
license: apache-2.0
widget:
- text: "در یک اتفاق شگفت انگیز، پژوهشگران"
- text: "گرفتگی بینی در کودکان و بهخصوص نوزادان باعث میشود"
- text: "امیدواریم نوروز امسال سالی"
---
# ParsGPT2
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ParsGPT2,
author = {Hooshvare Team},
title = {ParsGPT2 the Persian version of GPT2},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/hooshvare/parsgpt}},
}
```
## Questions?
Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo. |
BreadAi/gpt-Youtube | BreadAi | "2023-12-06T12:44:54Z" | 1,727 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:breadlicker45/youtube-comments-180k",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-02-23T02:47:05Z" | ---
datasets:
- breadlicker45/youtube-comments-180k
pipeline_tag: text-generation
---
This model is trained on 180K YouTube comments.
It is trained for 100k steps.
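The card does not include a usage snippet; a minimal text-generation sketch (assuming the standard Hugging Face Transformers causal-LM interface for this GPT-NeoX checkpoint) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BreadAi/gpt-Youtube"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "this video is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```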
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BreadAi__gpt-Youtube)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.86 |
| ARC (25-shot) | 23.29 |
| HellaSwag (10-shot) | 26.34 |
| MMLU (5-shot) | 23.54 |
| TruthfulQA (0-shot) | 48.63 |
| Winogrande (5-shot) | 48.93 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 3.32 |
|
Rohithqwerty/model2_ | Rohithqwerty | "2024-06-21T06:01:38Z" | 1,727 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-21T05:52:05Z" | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Rohithqwerty
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meta-math/MetaMath-Llemma-7B | meta-math | "2023-12-21T01:37:41Z" | 1,726 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-19T22:58:29Z" | ---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
---
See our paper at https://arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` in `meta-math/MetaMathQA`; each item is from the GSM8K or MATH train set.
## Model Details
MetaMath-Llemma-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Llemma-7B model. We are glad to see that using the MetaMathQA datasets and changing the base model from LLaMA-2-7B to Llemma-7B boosts the MATH performance from 19.8 to **30.0**.
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
Prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
Replace `{instruction}` with your query question.
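As an illustrative usage sketch (not from the original card; it assumes the standard Hugging Face Transformers causal-LM interface and the packages listed above), generation with this template could look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-math/MetaMath-Llemma-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

question = "What is 15% of 240?"  # example query
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response: Let's think step by step."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```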
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Llemma-7B** | **69.2** | **30.0** |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
## Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
```
```bibtex
@article{azerbayev2023llemma,
title={Llemma: An open language model for mathematics},
author={Azerbayev, Zhangir and Schoelkopf, Hailey and Paster, Keiran and Santos, Marco Dos and McAleer, Stephen and Jiang, Albert Q and Deng, Jia and Biderman, Stella and Welleck, Sean},
journal={arXiv preprint arXiv:2310.10631},
year={2023}
}
``` |
eachadea/vicuna-13b-1.1 | eachadea | "2023-05-02T09:07:12Z" | 1,725 | 135 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-13T01:47:56Z" | ---
license: apache-2.0
inference: false
---
**delta v1.1 merge**
<br>
<br>
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://vicuna.lmsys.org/
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries.
- Fix the supervised fine-tuning loss computation for better model quality. |
TheBloke/Stheno-L2-13B-GGUF | TheBloke | "2023-09-27T12:47:03Z" | 1,725 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/Stheno-L2-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | "2023-09-01T22:22:44Z" | ---
language:
- en
license: llama2
model_name: Stheno L2 13B
base_model: Sao10K/Stheno-L2-13B
inference: false
model_creator: Sao10K
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Stheno L2 13B - GGUF
- Model creator: [Sao10K](https://huggingface.co/Sao10K)
- Original model: [Stheno L2 13B](https://huggingface.co/Sao10K/Stheno-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sao10K's Stheno L2 13B](https://huggingface.co/Sao10K/Stheno-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Stheno-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF)
* [Sao10K's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Stheno-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stheno-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [stheno-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [stheno-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [stheno-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [stheno-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stheno-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [stheno-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [stheno-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stheno-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [stheno-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [stheno-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [stheno-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF/blob/main/stheno-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Stheno-L2-13B-GGUF and below it, a specific filename to download, such as: stheno-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install "huggingface-hub>=0.17.1"
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Stheno-L2-13B-GGUF stheno-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Stheno-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Stheno-L2-13B-GGUF stheno-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m stheno-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Stheno-L2-13B-GGUF", model_file="stheno-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sao10K's Stheno L2 13B
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 70%; min-width: 300px; display: block; margin: auto;">
An experimental merge of several models using two different methods, [Ties-Merge](https://github.com/cg123/ties-merge) and [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient).
I plan for this to be the base of my Model with my own [Stheno: ERP-Based LORA] merged in, some time in the future.
Stheno:
<br>Gradient Merge of Stheno-P1 & Stheno-P2.
SISTER MODEL HERE: [Stheno-Inverted-L2-13B](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B)
Quants courtesy of TheBloke!
<br>[GPTQ](https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ)
<br>[GGUF](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF)
<br>[GGML](https://huggingface.co/TheBloke/Stheno-L2-13B-GGML)
Test Checklist:
<br>Censorship - Fairly Uncensored
<br>Writing - Good Prose, Fairly Descriptive
<br>NSFW - Yes
<br>IQ Level - Pretty Smart
<br>Formatting - Proper Formatting with Examples
Stheno-P1 [Ties-Merge]
<br>-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)
Stheno-P2 [Ties-Merge]
<br>-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
<br>-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
<br>-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Below is the Illustration for the Final Merge:

Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.
Thanks to the original model creators too!
```
Art by wada_kazu / わだかず (pixiv page private?)
```
<!-- original-model-card end -->
|
MaziyarPanahi/TopStarClaudioArena-GGUF | MaziyarPanahi | "2024-06-15T22:13:36Z" | 1,725 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/TopStarClaudioArena"
] | text-generation | "2024-06-15T21:52:22Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:NousResearch/Hermes-2-Theta-Llama-3-8B
- base_model:mergekit-community/mergekit-slerp-ebgdloh
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: TopStarClaudioArena-GGUF
base_model: mergekit-community/TopStarClaudioArena
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/TopStarClaudioArena-GGUF](https://huggingface.co/MaziyarPanahi/TopStarClaudioArena-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/TopStarClaudioArena](https://huggingface.co/mergekit-community/TopStarClaudioArena)
## Description
[MaziyarPanahi/TopStarClaudioArena-GGUF](https://huggingface.co/MaziyarPanahi/TopStarClaudioArena-GGUF) contains GGUF format model files for [mergekit-community/TopStarClaudioArena](https://huggingface.co/mergekit-community/TopStarClaudioArena).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
nvidia/groupvit-gcc-yfcc | nvidia | "2022-09-26T13:54:38Z" | 1,724 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"groupvit",
"feature-extraction",
"vision",
"arxiv:2202.11094",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-06-21T08:48:32Z" | ---
tags:
- vision
---
# Model Card: GroupViT
This checkpoint is uploaded by Jiarui Xu.
## Model Details
The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
Inspired by [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories.
### Model Date
June 2022
### Abstract
Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision.
### Documents
- [GroupViT Paper](https://arxiv.org/abs/2202.11094)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import AutoProcessor, GroupViTModel
model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
```
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/groupvit.html#).
### BibTeX entry and citation info
```bibtex
@article{xu2022groupvit,
author = {Xu, Jiarui and De Mello, Shalini and Liu, Sifei and Byeon, Wonmin and Breuel, Thomas and Kautz, Jan and Wang, Xiaolong},
title = {GroupViT: Semantic Segmentation Emerges from Text Supervision},
journal = {arXiv preprint arXiv:2202.11094},
year = {2022},
}
```
|
Helsinki-NLP/opus-mt-en-cs | Helsinki-NLP | "2023-08-16T11:29:17Z" | 1,723 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-cs
* source languages: en
* target languages: cs
* OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt)
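A minimal usage sketch with Hugging Face Transformers (assuming the standard MarianMT interface used for Helsinki-NLP OPUS-MT checkpoints):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-cs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```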
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.cs | 22.8 | 0.507 |
| news-test2008.en.cs | 20.7 | 0.485 |
| newstest2009.en.cs | 21.8 | 0.500 |
| newstest2010.en.cs | 22.1 | 0.505 |
| newstest2011.en.cs | 23.2 | 0.507 |
| newstest2012.en.cs | 20.8 | 0.482 |
| newstest2013.en.cs | 24.7 | 0.514 |
| newstest2015-encs.en.cs | 24.9 | 0.527 |
| newstest2016-encs.en.cs | 26.7 | 0.540 |
| newstest2017-encs.en.cs | 22.7 | 0.503 |
| newstest2018-encs.en.cs | 22.9 | 0.504 |
| newstest2019-encs.en.cs | 24.9 | 0.518 |
| Tatoeba.en.cs | 46.1 | 0.647 |
|
togethercomputer/RedPajama-INCITE-7B-Instruct | togethercomputer | "2023-08-09T18:01:27Z" | 1,723 | 104 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:togethercomputer/RedPajama-Data-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-05T05:28:20Z" | ---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- togethercomputer/RedPajama-Data-Instruct
widget:
- text: |-
Label the sentences as either 'positive', 'negative', 'mixed', or 'neutral':
Sentence: I can say that there isn't anything I would change.
Label: positive
Sentence: I'm not sure about this.
Label: neutral
Sentence: I liked some parts but I didn't like other parts.
Label: mixed
Sentence: I think the background image could have been better.
Label: negative
Sentence: I really like it.
Label:
example_title: Sentiment Analysis
- text: |-
Please answer the following question:
Question: What is the capital of Canada?
Answer: Ottawa
Question: What is the currency of Switzerland?
Answer: Swiss franc
Question: In which country is Wisconsin located?
Answer:
example_title: Question Answering
- text: >-
Given a news article, classify its topic.
Possible labels: 1. World 2. Sports 3. Business 4. Sci/Tech
Article: A nearby star thought to harbor comets and asteroids now appears to
be home to planets, too.
Label: Sci/Tech
Article: Soaring crude prices plus worries about the economy and the outlook
for earnings are expected to hang over the stock market next week during the
depth of the summer doldrums.
Label: Business
Article: Murtagh a stickler for success Northeastern field hockey coach
Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to
detract from a team that has been the America East champion for the past
three years and has been to the NCAA tournament 13 times.
Label::
example_title: Topic Classification
- text: |-
Paraphrase the given sentence into a different sentence.
Input: Can you recommend some upscale restaurants in New York?
Output: What upscale restaurants do you recommend in New York?
Input: What are the famous places we should not miss in Paris?
Output: Recommend some of the best places to visit in Paris?
Input: Could you recommend some hotels that have cheap price in Zurich?
Output:
example_title: Paraphrasing
- text: >-
Given a review from Amazon's food products, the task is to generate a short
summary of the given review in the input.
Input: I have bought several of the Vitality canned dog food products and
have found them all to be of good quality. The product looks more like a
stew than a processed meat and it smells better. My Labrador is finicky and
she appreciates this product better than most.
Output: Good Quality Dog Food
Input: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were
actually small sized unsalted. Not sure if this was an error or if the
vendor intended to represent the product as 'Jumbo'.
Output: Not as Advertised
Input: My toddler loves this game to a point where he asks for it. That's a
big thing for me. Secondly, no glitching unlike one of their competitors
(PlayShifu). Any tech I don’t have to reach out to support for help is a
good tech for me. I even enjoy some of the games and activities in this.
Overall, this is a product that shows that the developers took their time
and made sure people would not be asking for refund. I’ve become bias
regarding this product and honestly I look forward to buying more of this
company’s stuff. Please keep up the great work.
Output:
example_title: Text Summarization
- text: |-
Identify which sense of a word is meant in a given context.
Context: The river overflowed the bank.
Word: bank
Sense: river bank
Context: A mouse takes much more room than a trackball.
Word: mouse
Sense: computer mouse
Context: The bank will not be accepting cash on Saturdays.
Word: bank
Sense: commercial (finance) banks
Context: Bill killed the project
Word: kill
Sense:
example_title: Word Sense Disambiguation
- text: >-
Given a pair of sentences, choose whether the two sentences agree
(entailment)/disagree (contradiction) with each other.
Possible labels: 1. entailment 2. contradiction
Sentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was
dressed in winter clothes.
Label: entailment
Sentence 1: The boy skated down the staircase railing. Sentence 2: The boy
is a newbie skater.
Label: contradiction
Sentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A
couple riding in a golf cart.
Label:
example_title: Natural Language Inference
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# RedPajama-INCITE-7B-Instruct
RedPajama-INCITE-7B-Instruct was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), with exclusion of tasks that overlap with the HELM core scenarios.
- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 6.9B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 16GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## GPU Inference in Int8
This requires a GPU with 12GB memory.
To run inference with int8, please ensure you have installed accelerate and bitsandbytes. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Instruct", torch_dtype=torch.bfloat16)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
RedPajama-INCITE-7B-Instruct is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
RedPajama-INCITE-7B-Instruct is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
RedPajama-INCITE-7B-Instruct, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 1B tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
digiplay/Realisian_v6 | digiplay | "2024-06-28T11:46:11Z" | 1,723 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-21T16:11:34Z" | ---
license: other
---
Model info:
https://civitai.com/models/47130/realisian
|
KoboldAI/fairseq-dense-125M | KoboldAI | "2023-11-18T11:53:11Z" | 1,722 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xglm",
"text-generation",
"en",
"arxiv:2112.10684",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: en
---
This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
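Since the conversion targets the standard XGLM architecture, the checkpoint loads like any other `transformers` causal language model. The snippet below is a minimal sketch; the prompt is illustrative, and as a small base model it produces plain continuations rather than instruction-following answers:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="KoboldAI/fairseq-dense-125M")
print(generator("The Moon is", max_new_tokens=20)[0]["generated_text"])
```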
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-125M)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.0 |
| ARC (25-shot) | 24.06 |
| HellaSwag (10-shot) | 34.14 |
| MMLU (5-shot) | 23.98 |
| TruthfulQA (0-shot) | 43.72 |
| Winogrande (5-shot) | 50.59 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.5 |
|
jplu/tf-xlm-roberta-base | jplu | "2020-12-11T21:48:00Z" | 1,722 | 1 | transformers | [
"transformers",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | # Tensorflow XLM-RoBERTa
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
## XLM-RoBERTa
[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of filtered CommonCrawl data covering 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
## Model Weights
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5)
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5)
## Usage
With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:
```python
from transformers import TFXLMRobertaModel
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```
Or
```
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```
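As a quick sanity check, the sketch below extracts contextual embeddings with the TensorFlow model; the example sentence is illustrative:
```python
from transformers import AutoTokenizer, TFXLMRobertaModel

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

# encode a sentence and inspect the contextual embeddings
inputs = tokenizer("Hello, world!", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```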
## Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/jplu).
## Acknowledgments
Thanks to all the Huggingface team for the support and their amazing library!
|
lxyuan/span-marker-bert-base-multilingual-uncased-multinerd | lxyuan | "2023-12-21T02:04:40Z" | 1,722 | 15 | span-marker | [
"span-marker",
"pytorch",
"generated_from_trainer",
"ner",
"named-entity-recognition",
"token-classification",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ru",
"zh",
"dataset:Babelscape/multinerd",
"base_model:bert-base-multilingual-uncased",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | token-classification | "2023-08-14T09:34:03Z" | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
- zh
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- ner
- named-entity-recognition
- span-marker
datasets:
- Babelscape/multinerd
metrics:
- precision
- recall
- f1
pipeline_tag: token-classification
widget:
- text: amelia earthart flog mit ihrer einmotorigen lockheed vega 5b über den atlantik
nach paris.
example_title: German
- text: amelia earhart flew her single engine lockheed vega 5b across the atlantic
to paris.
example_title: English
- text: amelia earthart voló su lockheed vega 5b monomotor a través del océano atlántico
hasta parís.
example_title: Spanish
- text: amelia earthart a fait voler son monomoteur lockheed vega 5b à travers l'ocean
atlantique jusqu'à paris.
example_title: French
- text: amelia earhart ha volato con il suo monomotore lockheed vega 5b attraverso
l'atlantico fino a parigi.
example_title: Italian
- text: amelia earthart vloog met haar één-motorige lockheed vega 5b over de atlantische
oceaan naar parijs.
example_title: Dutch
- text: amelia earthart przeleciała swoim jednosilnikowym samolotem lockheed vega
5b przez ocean atlantycki do paryża.
example_title: Polish
- text: amelia earhart voou em seu monomotor lockheed vega 5b através do atlântico
para paris.
example_title: Portuguese
- text: амелия эртхарт перелетела на своем одномоторном самолете lockheed vega 5b
через атлантический океан в париж.
example_title: Russian
- text: amelia earthart flaug eins hreyfils lockheed vega 5b yfir atlantshafið til
parísar.
example_title: Icelandic
- text: η amelia earthart πέταξε το μονοκινητήριο lockheed vega 5b της πέρα από
τον ατλαντικό ωκεανό στο παρίσι.
example_title: Greek
- text: amelia earhartová přeletěla se svým jednomotorovým lockheed vega 5b přes atlantik
do paříže.
example_title: Czech
- text: amelia earhart lensi yksimoottorisella lockheed vega 5b:llä atlantin yli pariisiin.
example_title: Finnish
- text: amelia earhart fløj med sin enmotoriske lockheed vega 5b over atlanten til
paris.
example_title: Danish
- text: amelia earhart flög sin enmotoriga lockheed vega 5b över atlanten till paris.
example_title: Swedish
- text: amelia earhart fløy sin enmotoriske lockheed vega 5b over atlanterhavet til
paris.
example_title: Norwegian
- text: amelia earhart și-a zburat cu un singur motor lockheed vega 5b peste atlantic
până la paris.
example_title: Romanian
- text: amelia earhart menerbangkan mesin tunggal lockheed vega 5b melintasi atlantik
ke paris.
example_title: Indonesian
- text: амелія эрхарт пераляцела на сваім аднаматорным lockheed vega 5b праз атлантыку
ў парыж.
example_title: Belarusian
- text: амелія ергарт перелетіла на своєму одномоторному літаку lockheed vega 5b через
атлантику до парижа.
example_title: Ukrainian
- text: amelia earhart preletjela je svojim jednomotornim zrakoplovom lockheed vega
5b preko atlantika do pariza.
example_title: Croatian
- text: amelia earhart lendas oma ühemootoriga lockheed vega 5b üle atlandi ookeani
pariisi.
example_title: Estonian
base_model: bert-base-multilingual-uncased
model-index:
- name: span-marker-bert-base-multilingual-uncased-multinerd
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: MultiNERD
type: Babelscape/multinerd
split: test
revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
metrics:
- type: f1
value: 0.9187
name: F1
- type: precision
value: 0.9202
name: Precision
- type: recall
value: 0.9172
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# span-marker-bert-base-multilingual-uncased-multinerd
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an [Babelscape/multinerd](https://huggingface.co/datasets/Babelscape/multinerd) dataset.
Is your data always capitalized correctly? Then consider using the cased variant of this model instead for better performance:
[lxyuan/span-marker-bert-base-multilingual-cased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd).
This model achieves the following results on the evaluation set:
- Loss: 0.0054
- Overall Precision: 0.9275
- Overall Recall: 0.9147
- Overall F1: 0.9210
- Overall Accuracy: 0.9842
Test set results:
- test_loss: 0.0058621917851269245,
- test_overall_accuracy: 0.9831472809849865,
- test_overall_f1: 0.9187844693592546,
- test_overall_precision: 0.9202802342397876,
- test_overall_recall: 0.9172935588307115,
- test_runtime: 2716.7472,
- test_samples_per_second: 149.141,
- test_steps_per_second: 4.661,
Note:
This is a replication of Tom's work. In this work, we used slightly different hyperparameters: `epochs=3` and `gradient_accumulation_steps=2`.
We also switched to the uncased [bert model](https://huggingface.co/bert-base-multilingual-uncased) to see if an uncased encoder model would perform better for commonly lowercased entities, such as food. Please check the discussion [here](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd/discussions/1).
Refer to the official [model page](https://huggingface.co/tomaarsen/span-marker-mbert-base-multinerd) to review their results and training script.
## Results:
| **Language** | **Precision** | **Recall** | **F1** |
|--------------|---------------|------------|-----------|
| **all** | 92.03 | 91.73 | **91.88** |
| **de** | 94.96 | 94.87 | **94.91** |
| **en** | 93.69 | 93.75 | **93.72** |
| **es** | 91.19 | 90.69 | **90.94** |
| **fr** | 91.36 | 90.74 | **91.05** |
| **it** | 90.51 | 92.57 | **91.53** |
| **nl** | 93.23 | 92.13 | **92.67** |
| **pl** | 92.17 | 91.59 | **91.88** |
| **pt** | 92.70 | 91.59 | **92.14** |
| **ru** | 92.31 | 92.36 | **92.34** |
| **zh** | 88.91 | 87.53 | **88.22** |
Below is a combined table that compares the results of the cased and uncased models for each language:
| **Language** | **Metric** | **Cased** | **Uncased** |
|--------------|--------------|-----------|-------------|
| **all** | Precision | 92.42 | 92.03 |
| | Recall | 92.81 | 91.73 |
| | F1 | **92.61** | 91.88 |
| **de** | Precision | 95.03 | 94.96 |
| | Recall | 95.07 | 94.87 |
| | F1 | **95.05** | 94.91 |
| **en** | Precision | 95.00 | 93.69 |
| | Recall | 95.40 | 93.75 |
| | F1 | **95.20** | 93.72 |
| **es** | Precision | 92.05 | 91.19 |
| | Recall | 91.37 | 90.69 |
| | F1 | **91.71** | 90.94 |
| **fr** | Precision | 92.37 | 91.36 |
| | Recall | 91.41 | 90.74 |
| | F1 | **91.89** | 91.05 |
| **it** | Precision | 91.45 | 90.51 |
| | Recall | 93.15 | 92.57 |
| | F1 | **92.29** | 91.53 |
| **nl** | Precision | 93.85 | 93.23 |
| | Recall | 92.98 | 92.13 |
| | F1 | **93.41** | 92.67 |
| **pl** | Precision | 93.13 | 92.17 |
| | Recall | 92.66 | 91.59 |
| | F1 | **92.89** | 91.88 |
| **pt** | Precision | 93.60 | 92.70 |
| | Recall | 92.50 | 91.59 |
| | F1 | **93.05** | 92.14 |
| **ru** | Precision | 93.25 | 92.31 |
| | Recall | 93.32 | 92.36 |
| | F1 | **93.29** | 92.34 |
| **zh** | Precision | 89.47 | 88.91 |
| | Recall | 88.40 | 87.53 |
| | F1 | **88.93** | 88.22 |
Short discussion:
Upon examining the results, one might conclude that the cased version of the model is better than the uncased version,
as it outperforms the latter across all languages. However, I recommend that users test both models on their specific
datasets (or domains) to determine which one actually delivers better performance. My reasoning for this suggestion
stems from a brief comparison I conducted on the FOOD (food) entities. I found that both cased and uncased models are
sensitive to the full stop punctuation mark. We direct readers to the section: Quick Comparison on FOOD Entities.
## Label set
| Class | Description | Examples |
|-------|-------------|----------|
| **PER (person)** | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
| **ORG (organization)** | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
| **LOC (location)** | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
| **ANIM (animal)** | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
| **BIO (biological)** | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
| **CEL (celestial)** | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
| **DIS (disease)** | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
| **EVE (event)** | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
| **FOOD (food)** | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
| **INST (instrument)** | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
| **MEDIA (media)** | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
| **PLANT (plant)** | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
| **MYTH (mythological)** | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
| **TIME (time)** | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
| **VEHI (vehicle)** | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |
## Inference Example
```python
# install span_marker first: pip install span_marker
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
entities = model.predict(description)
entities
>>>
[
{'span': 'Singapore', 'label': 'LOC', 'score': 0.9999247789382935, 'char_start_index': 0, 'char_end_index': 9},
{'span': 'laksa', 'label': 'FOOD', 'score': 0.794235348701477, 'char_start_index': 93, 'char_end_index': 98},
{'span': 'Malaysia', 'label': 'LOC', 'score': 0.9999157190322876, 'char_start_index': 106, 'char_end_index': 114}
]
# missed: Hainanese chicken rice as FOOD
# missed: nasi lemak as FOOD
# missed: rendang as FOOD
# note: Unfortunately, this uncased version still fails to pick up those commonly lowercased food entities and even misses out on the capitalized `Hainanese chicken rice` entity.
```
#### Quick test on Chinese
```python
from span_marker import SpanMarkerModel
model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
# translate to chinese
description = "Singapore is renowned for its hawker centers offering dishes \
like Hainanese chicken rice and laksa, while Malaysia boasts dishes such as \
nasi lemak and rendang, reflecting its rich culinary heritage."
zh_description = "新加坡因其小贩中心提供海南鸡饭和叻沙等菜肴而闻名, 而马来西亚则拥有椰浆饭和仁当等菜肴,反映了其丰富的烹饪传统."
entities = model.predict(zh_description)
entities
>>>
[
{'span': '新加坡', 'label': 'LOC', 'score': 0.8477746248245239, 'char_start_index': 0, 'char_end_index': 3},
{'span': '马来西亚', 'label': 'LOC', 'score': 0.7525337934494019, 'char_start_index': 27, 'char_end_index': 31}
]
# It only managed to capture two countries: Singapore and Malaysia.
# All other entities were missed out.
# Same prediction as the [cased model](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd)
```
### Quick Comparison on FOOD Entities
In this quick comparison, we found that a full stop punctuation mark seems to help the uncased model identify food entities,
regardless of whether they are capitalized or in uppercase. In contrast, the cased model doesn't respond well to full stops,
and adding them would lower the prediction score.
```python
from span_marker import SpanMarkerModel
cased_model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-cased-multinerd")
uncased_model = SpanMarkerModel.from_pretrained("lxyuan/span-marker-bert-base-multilingual-uncased-multinerd")
# no full stop mark
uncased_model.predict("i love fried chicken and korea bbq")
>>> []
uncased_model.predict("i love fried chicken and korea BBQ") # Uppercase BBQ only
>>> []
uncased_model.predict("i love fried chicken and Korea BBQ") # Capitalize korea and uppercase BBQ
>>> []
# add full stop to get better result
uncased_model.predict("i love fried chicken and korea bbq.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'korea bbq', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25,'char_end_index': 34}
]
uncased_model.predict("i love fried chicken and korea BBQ.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25, 'char_end_index': 34}
]
uncased_model.predict("i love fried chicken and Korea BBQ.")
>>> [
{'span': 'fried chicken', 'label': 'FOOD', 'score': 0.6531468629837036, 'char_start_index': 7, 'char_end_index': 20},
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.9738698601722717, 'char_start_index': 25, 'char_end_index': 34}
]
# no full stop mark
cased_model.predict("i love fried chicken and korea bbq")
>>> [
{'span': 'korea bbq', 'label': 'FOOD', 'score': 0.5054221749305725, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and korea BBQ")
>>> [
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.6987857222557068, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and Korea BBQ")
>>> [
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.9755308032035828, 'char_start_index': 25, 'char_end_index': 34}
]
# adding a full stop hurts the cased model's prediction score a little
cased_model.predict("i love fried chicken and korea bbq.")
>>> []
cased_model.predict("i love fried chicken and korea BBQ.")
>>> [
{'span': 'korea BBQ', 'label': 'FOOD', 'score': 0.5078140497207642, 'char_start_index': 25, 'char_end_index': 34}
]
cased_model.predict("i love fried chicken and Korea BBQ.")
>>> [
{'span': 'Korea BBQ', 'label': 'FOOD', 'score': 0.895089328289032, 'char_start_index': 25, 'char_end_index': 34}
]
```
## Training procedure
One can reproduce the result running this [script](https://huggingface.co/tomaarsen/span-marker-mbert-base-multinerd/blob/main/train.py)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0157 | 1.0 | 50369 | 0.0048 | 0.9143 | 0.8986 | 0.9064 | 0.9807 |
| 0.003 | 2.0 | 100738 | 0.0047 | 0.9237 | 0.9126 | 0.9181 | 0.9835 |
| 0.0017 | 3.0 | 151107 | 0.0054 | 0.9275 | 0.9147 | 0.9210 | 0.9842 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3 |
sbunlp/fabert | sbunlp | "2024-02-12T14:48:38Z" | 1,722 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"fa",
"arxiv:2402.06617",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-02-09T14:00:20Z" | ---
language:
- fa
library_name: transformers
widget:
- text: "ز سوزناکی گفتار من [MASK] بگریست"
example_title: "Poetry 1"
- text: "نظر از تو برنگیرم همه [MASK] تا بمیرم که تو در دلم نشستی و سر مقام داری"
example_title: "Poetry 2"
- text: "هر ساعتم اندرون بجوشد [MASK] را وآگاهی نیست مردم بیرون را"
example_title: "Poetry 3"
- text: "غلام همت آن رند عافیت سوزم که در گدا صفتی [MASK] داند"
example_title: "Poetry 4"
- text: "این [MASK] اولشه."
example_title: "Informal 1"
- text: "دیگه خسته شدم! [MASK] اینم شد کار؟!"
example_title: "Informal 2"
- text: "فکر نکنم به موقع برسیم. بهتره [MASK] این یکی بشیم."
example_title: "Informal 3"
- text: "تا صبح بیدار موندم و داشتم برای [MASK] آماده می شدم."
example_title: "Informal 4"
- text: "زندگی بدون [MASK] خستهکننده است."
example_title: "Formal 1"
- text: "در حکم اولیه این شرکت مجاز به فعالیت شد ولی پس از بررسی مجدد، مجوز این شرکت [MASK] شد."
example_title: "Formal 2"
---
# FaBERT: Pre-training BERT on Persian Blogs
## Model Details
FaBERT is a Persian BERT-base model trained on the diverse HmBlogs corpus, encompassing both casual and formal Persian texts. Developed for natural language processing tasks, FaBERT is a robust solution for processing Persian text. Through evaluation across various Natural Language Understanding (NLU) tasks, FaBERT consistently demonstrates notable improvements while maintaining a compact model size. FaBERT is available on Hugging Face, so integrating it into your projects is hassle-free: you get enhanced performance without added complexity across a variety of NLP tasks.
## Features
- Pre-trained on the diverse HmBlogs corpus consisting more than 50 GB of text from Persian Blogs
- Remarkable performance across various downstream NLP tasks
- BERT architecture with 124 million parameters
## Useful Links
- **Repository:** [FaBERT on Github](https://github.com/SBU-NLP-LAB/FaBERT)
- **Paper:** [arXiv preprint](https://arxiv.org/abs/2402.06617)
## Usage
### Loading the Model with MLM head
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("sbunlp/fabert") # make sure to use the default fast tokenizer
model = AutoModelForMaskedLM.from_pretrained("sbunlp/fabert")
```
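Once loaded, the model can be smoke-tested with the `fill-mask` pipeline. The snippet below is a minimal sketch that reuses one of the widget examples from this card and prints the top prediction:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sbunlp/fabert")
print(fill_mask("زندگی بدون [MASK] خستهکننده است.")[0]["token_str"])
```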
### Downstream Tasks
Similar to the original English BERT, FaBERT can be fine-tuned on many downstream tasks (see the [transformers fine-tuning guide](https://huggingface.co/docs/transformers/en/training)).
Examples on Persian datasets are available in our [GitHub repository](#useful-links).
**make sure to use the default Fast Tokenizer**
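As a rough starting point, the sketch below attaches a randomly initialized classification head to FaBERT; `num_labels` and the example sentence are placeholders, and the head must be fine-tuned on your own labeled Persian data before its outputs mean anything:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sbunlp/fabert")  # default fast tokenizer
model = AutoModelForSequenceClassification.from_pretrained("sbunlp/fabert", num_labels=2)

inputs = tokenizer("این فیلم عالی بود.", return_tensors="pt")  # illustrative sentence
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); untrained head, fine-tune before use
```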
## Training Details
FaBERT was pre-trained with the MLM (WWM) objective, and the resulting perplexity on the validation set was 7.76.
| Hyperparameter | Value |
|-------------------|:--------------:|
| Batch Size | 32 |
| Optimizer | Adam |
| Learning Rate | 6e-5 |
| Weight Decay | 0.01 |
| Total Steps | 18 Million |
| Warmup Steps | 1.8 Million |
| Precision Format | TF32 |
## Evaluation
Here are some key performance results for the FaBERT model:
**Sentiment Analysis**
| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| MirasOpinion | **87.51** | 86.73 | 84.92 |
| MirasIrony | 74.82 | 71.08 | **75.51** |
| DeepSentiPers | **79.85** | 74.94 | 79.00 |
**Named Entity Recognition**
| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| PEYMA | **91.39** | 91.24 | 90.91 |
| ParsTwiner | **82.22** | 81.13 | 79.50 |
| MultiCoNER v2 | 57.92 | **58.09** | 51.47 |
**Question Answering**
| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| ParsiNLU | **55.87** | 44.89 | 42.55 |
| PQuAD | 87.34 | 86.89 | **87.60** |
| PCoQA | **53.51** | 50.96 | 51.12 |
**Natural Language Inference & QQP**
| Task | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| FarsTail | **84.45** | 82.52 | 83.50 |
| SBU-NLI | **66.65** | 58.41 | 58.85 |
| ParsiNLU QQP | **82.62** | 77.60 | 79.74 |
**Number of Parameters**
| | FaBERT | ParsBERT | XLM-R |
|:-------------|:------:|:--------:|:-----:|
| Parameter Count (M) | 124 | 162 | 278 |
| Vocabulary Size (K) | 50 | 100 | 250 |
For a more detailed performance analysis refer to the paper.
## How to Cite
If you use FaBERT in your research or projects, please cite it using the following BibTeX:
```bibtex
@article{masumi2024fabert,
title={FaBERT: Pre-training BERT on Persian Blogs},
author={Masumi, Mostafa and Majd, Seyed Soroush and Shamsfard, Mehrnoush and Beigy, Hamid},
journal={arXiv preprint arXiv:2402.06617},
year={2024}
}
```
|
numfa/numfalm_v2-1b | numfa | "2024-05-29T10:38:38Z" | 1,722 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-29T10:30:31Z" | ---
license: apache-2.0
language:
- en
- th
library_name: transformers
pipeline_tag: text-generation
---
# NumFa v2 (1B)
NumFa v2 1B is a pretrained LLM with 1B parameters.
Base model: TinyLLama
**For testing only**
## Model Details
### Model Description
The model was trained on TPU.
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Out-of-Scope Use
Math, coding, and other languages
## Bias, Risks, and Limitations
The model can carry biases from its dataset. Use at your own risk!
## How to Get Started with the Model
Use the code below to get started with the model.
**Example**
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="numfa/numfa_v2-1b", torch_dtype=torch.bfloat16, device_map="auto")
# Generate a continuation from a plain-text prompt
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2,typical_p=1.)
print(outputs[0]["generated_text"])
``` |
MaziyarPanahi/TopEvolution7RAG-GGUF | MaziyarPanahi | "2024-06-15T13:17:01Z" | 1,722 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:DeepMount00/Mistral-RAG",
"base_model:mergekit-community/TopEvolutionWiz",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/TopEvolution7RAG"
] | text-generation | "2024-06-15T12:55:44Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:DeepMount00/Mistral-RAG
- base_model:mergekit-community/TopEvolutionWiz
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: TopEvolution7RAG-GGUF
base_model: mergekit-community/TopEvolution7RAG
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/TopEvolution7RAG-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution7RAG-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/TopEvolution7RAG](https://huggingface.co/mergekit-community/TopEvolution7RAG)
## Description
[MaziyarPanahi/TopEvolution7RAG-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution7RAG-GGUF) contains GGUF format model files for [mergekit-community/TopEvolution7RAG](https://huggingface.co/mergekit-community/TopEvolution7RAG).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
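As a quick example, the snippet below is a minimal sketch using `llama-cpp-python`; the quantization filename is an assumption, so point `model_path` at whichever `.gguf` file you downloaded from this repository:
```python
from llama_cpp import Llama

llm = Llama(model_path="TopEvolution7RAG.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is the capital of France?\nA:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```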
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator | Ar4ikov | "2023-03-22T00:57:27Z" | 1,721 | 37 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"art",
"code",
"en",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"dataset:bartman081523/stable-diffusion-discord-prompts",
"dataset:Ar4ikov/sd_filtered_2m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-01-11T22:57:19Z" | ---
license: mit
datasets:
- Gustavosta/Stable-Diffusion-Prompts
- bartman081523/stable-diffusion-discord-prompts
- Ar4ikov/sd_filtered_2m
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- art
- code
widget:
- text: A Tokio town landscape, sunset, by
- text: A Tokio town landscape, sunset
- text: 1girl, blue eyes, dark hair
- text: An astronaut, holding a wrench in outer space
- text: A fire soul eater demon
- text: A portrait of a beautiful woman
- text: A portret of an artist man, thick beard
---
# Stable Diffusion Prompt Generator
TODO: Complete me next time
## Introduction
...
```python
from transformers import pipeline
pipe = pipeline('text-generation', model='Ar4ikov/gpt2-650k-stable-diffusion-prompt-generator')
def get_valid_prompt(text: str) -> str:
    # keep the text before the first full stop or line break, whichever comes first
    dot_split = text.split('.')[0]
    n_split = text.split('\n')[0]
    return min(dot_split, n_split, key=len)
prompt = 'A Tokio town landscape, sunset, by'
valid_prompt = get_valid_prompt(pipe(prompt, max_length=77)[0]['generated_text'])
print(valid_prompt)
# >>> A Tokio town landscape, sunset, by Greg Rutkowski,Artgerm,trending on Behance,light effect,high detail,3d sculpture,golden ratio,dramatic,dramatic background,digital art
``` |
ishorn5/RTLCoder-Deepseek-v1.1 | ishorn5 | "2024-02-25T13:36:46Z" | 1,721 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:2312.08617",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T11:30:51Z" | ---
license: apache-2.0
---
Paper: https://arxiv.org/pdf/2312.08617.pdf |
InstantX/SD3-Controlnet-Pose | InstantX | "2024-06-19T05:30:01Z" | 1,721 | 32 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | "2024-06-15T17:05:20Z" | # SD3 Controlnet
| control image | weight=0.0 | weight=0.3 | weight=0.5 | weight=0.7 | weight=0.9 |
|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
|<img src="./pose.jpg" width = "400" /> | <img src="./demo_0.jpg" width = "400" /> | <img src="./demo_3.jpg" width = "400" /> | <img src="./demo_5.jpg" width = "400" /> | <img src="./demo_7.jpg" width = "400" /> | <img src="./demo_9.jpg" width = "400" /> |
**Please ensure that the version of diffusers >= 0.30.0.dev0.**
# Demo
```python
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
from diffusers.utils import load_image
# load pipeline
controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-3-medium-diffusers",
controlnet=controlnet
)
pipe.to("cuda", torch.float16)
# config
control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Pose/resolve/main/pose.jpg")
prompt = 'Anime style illustration of a girl wearing a suit. A moon in sky. In the background we see a big rain approaching. text "InstantX" on image'
n_prompt = 'NSFW, nude, naked, porn, ugly'
image = pipe(
prompt,
negative_prompt=n_prompt,
control_image=control_image,
controlnet_conditioning_scale=0.5,
).images[0]
image.save('image.jpg')
```
## Limitation
Because only 1024x1024 pixel resolution was used during training, inference performs best at this size; other sizes yield suboptimal results.
We will initiate multi-resolution training in the future and open-source the new weights at that time.
|
MaziyarPanahi/mergekit-slerp-bqvdymn-GGUF | MaziyarPanahi | "2024-06-17T11:59:17Z" | 1,721 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-bqvdymn"
] | text-generation | "2024-06-17T11:35:39Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:WizardLM/WizardMath-7B-V1.1
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-bqvdymn-GGUF
base_model: mergekit-community/mergekit-slerp-bqvdymn
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-bqvdymn-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-bqvdymn-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-bqvdymn](https://huggingface.co/mergekit-community/mergekit-slerp-bqvdymn)
## Description
[MaziyarPanahi/mergekit-slerp-bqvdymn-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-bqvdymn-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-bqvdymn](https://huggingface.co/mergekit-community/mergekit-slerp-bqvdymn).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
misri/bluePencilXL_v700 | misri | "2024-06-23T16:06:09Z" | 1,721 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-23T15:10:55Z" | ---
license: unknown
---
|
bmd1905/vietnamese-correction | bmd1905 | "2024-04-17T17:22:40Z" | 1,720 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-02-26T07:34:05Z" | ---
tags:
- generated_from_trainer
model-index:
- name: bartpho-syllable-finetuned-vietnamese_correction
results: []
license: apache-2.0
language:
- vi
---
# bartpho-syllable-finetuned-vietnamese_correction
This model is a fine-tuned version of [vinai/bartpho-syllable](https://github.com/VinAIResearch/BARTpho). The original dataset is available at [@duyvuleo/VNTC](https://github.com/duyvuleo/VNTC); I customized it for the error-correction task, and you can find my final dataset at [Huggingface Datasets](https://huggingface.co/datasets/bmd1905/error-correction-vi). All source code is available at [my Github repo](https://github.com/bmd1905/vietnamese-correction).
## Usage
```python
from transformers import pipeline
corrector = pipeline("text2text-generation", model="bmd1905/vietnamese-correction")
```
```python
# Example
MAX_LENGTH = 512
# Define the text samples
texts = [
"côn viec kin doanh thì rất kho khan nên toi quyết dinh chuyển sang nghề khac ",
"toi dang là sinh diên nam hai ở truong đạ hoc khoa jọc tự nhiên , trogn năm ke tiep toi sẽ chọn chuyen nganh về trí tue nhana tạo",
"Tôi đang học AI ở trun tam AI viet nam ",
"Nhưng sức huỷ divt của cơn bão mitch vẫn chưa thấm vào đâu lsovớithảm hoạ tại Bangladesh ăm 1970 ",
"Lần này anh Phươngqyết xếp hàng mua bằng được 1 chiếc",
"một số chuyen gia tài chính ngâSn hànG của Việt Nam cũng chung quan điểmnày",
"Cac so liệu cho thay ngươi dân viet nam đang sống trong 1 cuôc sóng không duojc nhu mong đọi",
"Nefn kinh té thé giới đang đúng trươc nguyen co của mọt cuoc suy thoai",
"Khong phai tất ca nhưng gi chung ta thấy dideu là sụ that",
"chinh phủ luôn cố găng het suc để naggna cao chat luong nền giáo duc =cua nuoc nhà",
"nèn kinh te thé giới đang đứng trươc nguy co của mọt cuoc suy thoai",
"kinh tế viet nam dang dứng truoc 1 thoi ky đổi mơi chưa tung có tienf lệ trong lịch sử"
]
# Batch prediction
predictions = corrector(texts, max_length=MAX_LENGTH)
# Print predictions
for text, pred in zip(texts, predictions):
print("- " + pred['generated_text'])
```
```
Output:
- Công việc kinh doanh thì rất khó khăn nên tôi quyết định chuyển sang nghề khác.
- Tôi đang là sinh viên hai ở trường đại học khoa học tự nhiên, trong năm kế tiếp, tôi sẽ chọn chuyên ngành về trí tuệ nhân tạo.
- Tôi đang học AI ở trung tâm AI Việt Nam.
- Nhưng sức huỷ diệt của cơn bão mitch vẫn chưa thấm vào đâu so với thảm hoạ tại Bangladesh năm 1970 .
- Lần này anh Phương quyết xếp hàng mua bằng được 1 chiếc.
- Một số chuyên gia tài chính ngân hàng của Việt Nam cũng chung quan điểm này.
- Các số liệu cho thấy ngươi dân Việt Nam đang sống trong 1 cuôc sóng không được nhu mong đọc.
- Niên kinh té thé giới đang đúng trương, nguyên cơ của một cuộc suy thoái.
- Không phai tất ca, nhưng giờ chúng ta thấy điều là sự thật.
- Chính phủ luôn cố găng hết sức để nâng cao chất lượng nền giáo dục của nước nhà.
- Nền kinh tế thế giới đang đứng trước nguy cơ của một cuộc suy thoái.
- Kinh tế Việt Nam đang đứng trước 1 thời kỳ đổi mới, chưa từng có tiền lệ trong lịch sử.
```
You can play around with my code at [Colab notebook](https://colab.research.google.com/github/bmd1905/vietnamese-correction/blob/main/inference.ipynb?hl=en). |
Korabbit/Llama-2-7b-chat-hf-afr-441step-flan-v2 | Korabbit | "2023-12-04T12:41:10Z" | 1,720 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:sordonia/flan-10k-flat",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-03T11:21:21Z" | ---
license: llama2
datasets:
- sordonia/flan-10k-flat
language:
- en
---
Hi there! This is a test of my "AFR training" approach.
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
# Initialize the index of the target element
low = 0
high = len(arr) - 1
# Loop until the target element is found or the end of the array is reached
while low <= high:
# Calculate the middle index of the array
mid = (low + high) // 2
# Check if the target element is in the middle index
if arr[mid] == target:
return mid
        # If the target element is not at the middle index, narrow the search range
        if arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
# If the target element is not found in the array, return -1
return -1
# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
Locutusque/Llama-3-Orca-1.0-8B | Locutusque | "2024-04-21T00:14:25Z" | 1,720 | 24 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-19T04:49:06Z" | ---
library_name: transformers
license: other
datasets:
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
---
# Llama-3-Orca-1.0-8B
<!-- Provide a quick summary of what the model is/does. -->

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
I fine-tuned Llama-3 8B mainly on SlimOrca, along with other datasets, to improve performance in math, coding, and writing.
- **Developed by:** Locutusque
- **Model type:** Built with Meta Llama 3
- **Language(s) (NLP):** Many?
- **License:** Llama 3 license https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
## Quants
### EXL2 [@bartowski](https://huggingface.co/bartowski/)
- https://huggingface.co/bartowski/Llama-3-Orca-1.0-8B-exl2
### GGUF [@bartowski](https://huggingface.co/bartowski/)
- https://huggingface.co/bartowski/Llama-3-Orca-1.0-8B-GGUF
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model has great performance in writing and coding.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Conversational AI.
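As a rough starting point for that use case, here is a minimal `transformers` sketch; it assumes the repository ships a chat template (standard for Llama-3 finetunes), and the dtype and sampling settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Llama-3-Orca-1.0-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat-formatted prompt; this assumes a chat template is defined in the repo.
messages = [{"role": "user", "content": "Write a short poem about the ocean."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```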
|
MaziyarPanahi/Evolver-GGUF | MaziyarPanahi | "2024-06-15T17:13:11Z" | 1,720 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/Evolver"
] | text-generation | "2024-06-15T16:51:15Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B
- base_model:mergekit-community/mergekit-slerp-ebgdloh
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Evolver-GGUF
base_model: mergekit-community/Evolver
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Evolver-GGUF](https://huggingface.co/MaziyarPanahi/Evolver-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/Evolver](https://huggingface.co/mergekit-community/Evolver)
## Description
[MaziyarPanahi/Evolver-GGUF](https://huggingface.co/MaziyarPanahi/Evolver-GGUF) contains GGUF format model files for [mergekit-community/Evolver](https://huggingface.co/mergekit-community/Evolver).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
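As a concrete example of the options above, the sketch below loads one of the quantized files with `llama-cpp-python`; the GGUF filename is an assumption, so point it at whichever quant you actually downloaded from this repository.

```python
from llama_cpp import Llama

# The filename is an assumption -- use the quant file you downloaded from this repo.
llm = Llama(model_path="./Evolver.Q4_K_M.gguf", n_ctx=4096)

# Simple completion call; sampling settings are illustrative only.
result = llm("Write a haiku about merging language models.", max_tokens=128, temperature=0.7)
print(result["choices"][0]["text"])
```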
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
facebook/blenderbot-1B-distill | facebook | "2023-03-30T16:12:16Z" | 1,719 | 36 | transformers | [
"transformers",
"pytorch",
"blenderbot",
"text2text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"arxiv:1907.06616",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
---
## Model description
+ Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616)
+ [Original PARLAI Code](https://parl.ai/projects/recipes/)
### Abstract
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
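For reference, a minimal generation sketch with `transformers` follows; it assumes the standard Blenderbot classes apply to this checkpoint, so verify against the library documentation before relying on it.

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_id = "facebook/blenderbot-1B-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_id)
model = BlenderbotForConditionalGeneration.from_pretrained(model_id)

# Encode a single user utterance and generate the bot's reply.
inputs = tokenizer("Hello, how are you doing today?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```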
|
flair/upos-english-fast | flair | "2023-04-07T09:37:31Z" | 1,719 | 5 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- ontonotes
widget:
- text: "I love Berlin."
---
## English Universal Part-of-Speech Tagging in Flair (fast model)
This is the fast universal part-of-speech tagging model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **98,47** (Ontonotes)
Predicts universal POS tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
|ADJ | adjective |
| ADP | adposition |
| ADV | adverb |
| AUX | auxiliary |
| CCONJ | coordinating conjunction |
| DET | determiner |
| INTJ | interjection |
| NOUN | noun |
| NUM | numeral |
| PART | particle |
| PRON | pronoun |
| PROPN | proper noun |
| PUNCT | punctuation |
| SCONJ | subordinating conjunction |
| SYM | symbol |
| VERB | verb |
| X | other |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/upos-english-fast")
# make example sentence
sentence = Sentence("I love Berlin.")
# predict POS tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted POS spans
print('The following POS tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('pos'):
print(entity)
```
This yields the following output:
```
Span [1]: "I" [− Labels: PRON (0.9996)]
Span [2]: "love" [− Labels: VERB (1.0)]
Span [3]: "Berlin" [− Labels: PROPN (0.9986)]
Span [4]: "." [− Labels: PUNCT (1.0)]
```
So, the word "*I*" is labeled as a **pronoun** (PRON), "*love*" is labeled as a **verb** (VERB) and "*Berlin*" is labeled as a **proper noun** (PROPN) in the sentence "*I love Berlin*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus: Corpus = ColumnCorpus(
"resources/tasks/onto-ner",
column_format={0: "text", 1: "pos", 2: "upos", 3: "ner"},
tag_to_bioes="ner",
)
# 2. what tag do we want to predict?
tag_type = 'upos'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('news-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward-fast'),
]
# embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/upos-english-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
timm/caformer_s36.sail_in22k_ft_in1k | timm | "2023-05-05T05:53:07Z" | 1,719 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T05:52:43Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for caformer_s36.sail_in22k_ft_in1k
A CAFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 39.3
- GMACs: 8.0
- Activations (M): 37.5
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_s36.sail_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
rinna/bilingual-gpt-neox-4b-instruction-sft | rinna | "2024-04-03T08:10:01Z" | 1,719 | 18 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2404.01657",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-31T11:25:11Z" | ---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: mit
datasets:
- Anthropic/hh-rlhf
language:
- ja
- en
inference: false
---
# bilingual-gpt-neox-4b-instruction-sft

---
# Update
- **2023/08/02** We uploaded the newly trained `rinna/bilingual-gpt-neox-4b-instruction-sft` with the MIT license.
- Please refrain from using the previous model released on 2023/07/31 for commercial purposes if you have already downloaded it.
- The new model released on 2023/08/02 is built from datasets with less strict licenses and has better evaluation performance, so we suggest using the new model.
- For reference, we provide the MD5 checksum values for the `pytorch_model.bin` files of the previous and current models.
- 2023/07/31 model: `edf190a323c0ae63f71476700fb0b462`
- 2023/08/02 model: `de72aa5b66beee7b65783c96f687d186`
- **2023/07/31** In the previously released `rinna/bilingual-gpt-neox-4b-instruction-sft`, we found that part of the training data (i.e. Openchat ShareGPT4 and WizardLM) have a non-commercial license, and thus it does not comply with **the MIT license**. We decided to remove the previous version and build a new SFT model from datasets with less strict licenses. The new model will be uploaded in a few days. We sincerely apologize for our careless mistake.
---
# Overview
This repository provides an English-Japanese bilingual GPT-NeoX model of 3.8 billion parameters.
The model is based on [`rinna/bilingual-gpt-neox-4b`](https://huggingface.co/rinna/bilingual-gpt-neox-4b) and has been finetuned to serve as an instruction-following conversational agent.
* **Model architecture**
A 36-layer, 2816-hidden-size transformer-based language model.
* **Fine-tuning**
The fine-tuning data is the subset of the following datasets.
* [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) and its Japanese translation
* [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
* **Model Series**
| Variant | Link |
| :-- | :--|
| Bilingual 4B MiniGPT4 | https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4 |
| Bilingual 4B PPO | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo |
| Bilingual 4B SFT | https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft |
| Bilingual 4B 8K | https://huggingface.co/rinna/bilingual-gpt-neox-4b-8k |
| Bilingual 4B | https://huggingface.co/rinna/bilingual-gpt-neox-4b |
| Japanese 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo |
| Japanese 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 |
| Japanese 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft |
| Japanese 3.6B | https://huggingface.co/rinna/japanese-gpt-neox-3.6b |
* **Contributors**
[Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Our evaluation experiments suggest that the bilingual-gpt-neox-4b-instruction-sft model performs slightly better than the previous [Japanese GPT-NeoX 3.6B PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) in Japanese tasks.
- *The 4-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, and JSQuAD.*
- *The 6-task average accuracy is based on results of JCommonsenseQA, JNLI, MARC-ja, JSQuAD, XWinograd, and JAQKET-v2.*
| Model | 4-task average accuracy | 6-task average accuracy |
| :-- | :-- | :-- |
| bilingual-gpt-neox-4b-instruction-ppo | 61.01 | 61.16 |
| **bilingual-gpt-neox-4b-instruction-sft** | **61.02** | **61.69** |
| bilingual-gpt-neox-4b | 56.12 | 51.83 |
| japanese-gpt-neox-3.6b-instruction-ppo | 59.86 | 60.07 |
| japanese-gpt-neox-3.6b | 55.07 | 50.32 |
---
# I/O Format
A special format has been adopted to construct inputs.
* An input prompt is formatted as a conversation between `ユーザー` and `システム`.
* Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`).
* The input prompt should be ended with `"システム: "` to acknowledge the model to generate a response.
* All the utterances in the input prompt should be separated by a newline `\n`.
Following is an example to construct input from a conversation.
~~~python
prompt = [
{
"speaker": "ユーザー",
"text": "Hello, you are an assistant that helps me learn Japanese."
},
{
"speaker": "システム",
"text": "Sure, what can I do for you?"
},
{
"speaker": "ユーザー",
"text": "VRはなんですか。"
}
]
prompt = [
f"{uttr['speaker']}: {uttr['text']}"
for uttr in prompt
]
prompt = "\n".join(prompt)
prompt = (
prompt
+ "\n"
+ "システム: "
)
print(prompt)
"""
ユーザー: Hello, you are an assistant that helps me learn Japanese.
システム: Sure, what can I do for you?
ユーザー: VRはなんですか。
システム:
"""
~~~
---
# How to use the model
**Notice:** Since the model is **sensitive to decoding hyper-parameters** (e.g. `temperature`, `top_p`, `top_k`, `repetition_penalty`), it is suggested to explore the best setting for your task.
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/bilingual-gpt-neox-4b-instruction-sft")
if torch.cuda.is_available():
model = model.to("cuda")
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=512,
do_sample=True,
temperature=1.0,
top_p=0.85,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):])
print(output)
"""VRとはVirtual Realityの略で、仮想現実とも呼ばれます。これは、コンピューターを使用して仮想世界を作り出し、仮想世界上でコンピューターのゲームや仮想世界を体験するための技術です。この技術は、コンピューターやモバイ ルデバイスの進歩によって、2015年以降、ますます普及しています。VRは、ゲームや仮想世界、その他のアプリケー ションなどのさまざまな分野で、コンピューターと人間の相互作用の新しい方法を提供しています。</s>"""
~~~~
---
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
* The tokenizer has a vocabulary size of 65,536.
* It uses *byte fallback* to decompose unknown text pieces into UTF-8 byte pieces to avoid producing `<UNK>` tokens.
* It can recognize *consecutive whitespaces*, *newlines*, and *tabs* to handle structured texts better.
* We turned off the default behaviour of prepending leading whitespace because it is not beneficial for processing Japanese.
* Specifically, single whitespace is always processed as one token so that any English word won't have a preceding whitespace like in many other tokenizers (e.g. `_Hello`).
* This decision trades the English processing efficiency for a unified way to treat whitespaces.
* It leads to a significantly lower loss of next token prediction on English data because whitespaces are easy to predict.
* **Don't forget to set `use_fast=False` to make the above features function correctly.**
---
# How to cite
~~~
@misc{rinna-bilingual-gpt-neox-4b-instruction-sft,
title = {rinna/bilingual-gpt-neox-4b-instruction-sft},
author = {Zhao, Tianyu and Sawada, Kei},
url = {https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft},
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
url = {https://arxiv.org/abs/2404.01657},
}
~~~
---
# License
[The MIT license](https://opensource.org/licenses/MIT) |
JosephusCheung/Qwen-LLaMAfied-7B-Chat | JosephusCheung | "2023-10-22T18:39:39Z" | 1,719 | 102 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"qwen",
"llama-2",
"en",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-04T08:43:39Z" | ---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
license: gpl-3.0
---
NEW VERSIONS: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)
This is the LLaMAfied replica of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (Original Version before 25.09.2023), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
You can use LlamaForCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models (using GPT2Tokenizer converted from the original tiktoken, by [vonjack](https://huggingface.co/vonjack)).
The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
Up until now, the model has undergone numerical alignment of weights and preliminary reinforcement learning in order to align with the original model. Some errors and outdated knowledge have been addressed through model editing methods. This model remains completely equivalent to the original version, without having any dedicated supervised finetuning on downstream tasks or other extensive conversation datasets.
PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
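To make the ChatML usage concrete, here is a minimal `transformers` sketch; the prompt string simply follows the ChatML layout, and the dtype and sampling settings are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

model_id = "JosephusCheung/Qwen-LLaMAfied-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-formatted prompt, as described above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```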
CURRENT MMLU: 53.48
CURRENT CEval (val): 54.13
```
MMLU - stem ACC: 46.40 Humanities ACC: 47.61 other ACC: 61.31 social ACC: 61.78 AVERAGE ACC:53.48
CEval (val) - STEM acc: 45.28 Social Science acc: 66.19 Humanities acc: 58.76 Other acc: 54.62 Hard acc:28.64 AVERAGE acc:54.13
```
Issue: Compared to the original Qwen-7B-Chat, which scores 53.90 in MMLU and 54.18 in CEval (val), our scores dropped slightly [-0.42 in MMLU, -0.05 in CEval (val)] due to insufficient realignment.
这是 [通义千问 Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (在 25.09.2023 之前的原始版本) 的 LLaMA 化版本,经过重新校准以适应原始的类似 LLaMA/LLaMA-2 的模型结构。
您可以使用 LlamaCausalLM 进行模型推理,和 LLaMA/LLaMA-2 保持一致(使用由 [vonjack](https://huggingface.co/vonjack) 从原始 tiktoken 转换而来的 GPT2Tokenizer 分词器)。
模型已经被编辑实现白标化,不再自称通义千问。
到目前为止,该模型已经进行了权重的数值对齐和初步的强化学习,以与原始模型保持一致。 一些错误和过时的知识已通过模型编辑方法得到解决。 该模型与原始版本完全等效,尚未对下游任务或其他广泛的对话数据集进行任何专门的监督微调。
PROMPT 格式: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
当前的 MMLU: 53.48
当前的 CEval (val): 54.13
```
MMLU - stem ACC: 46.40 Humanities ACC: 47.61 other ACC: 61.31 social ACC: 61.78 AVERAGE ACC:53.48
CEval (val) - STEM acc: 45.28 Social Science acc: 66.19 Humanities acc: 58.76 Other acc: 54.62 Hard acc:28.64 AVERAGE acc:54.13
```
问题:相比原本的 Qwen-7B-Chat 的 MMLU 分数 53.90 和 CEval (val) 分数 54.18,由于不够充分的重新对齐,分数都略有下降 [-0.42 in MMLU, -0.05 in CEval (val)]。 |
RichardErkhov/google_-_codegemma-7b-it-gguf | RichardErkhov | "2024-04-14T19:42:03Z" | 1,719 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-14T16:07:06Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
codegemma-7b-it - GGUF
- Model creator: https://huggingface.co/google/
- Original model: https://huggingface.co/google/codegemma-7b-it/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [codegemma-7b-it.Q2_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q2_K.gguf) | Q2_K | 3.24GB |
| [codegemma-7b-it.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [codegemma-7b-it.IQ3_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [codegemma-7b-it.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [codegemma-7b-it.IQ3_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [codegemma-7b-it.Q3_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q3_K.gguf) | Q3_K | 4.07GB |
| [codegemma-7b-it.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [codegemma-7b-it.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [codegemma-7b-it.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [codegemma-7b-it.Q4_0.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q4_0.gguf) | Q4_0 | 4.67GB |
| [codegemma-7b-it.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [codegemma-7b-it.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [codegemma-7b-it.Q4_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q4_K.gguf) | Q4_K | 4.96GB |
| [codegemma-7b-it.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [codegemma-7b-it.Q4_1.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q4_1.gguf) | Q4_1 | 5.12GB |
| [codegemma-7b-it.Q5_0.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q5_0.gguf) | Q5_0 | 5.57GB |
| [codegemma-7b-it.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [codegemma-7b-it.Q5_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q5_K.gguf) | Q5_K | 5.72GB |
| [codegemma-7b-it.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [codegemma-7b-it.Q5_1.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q5_1.gguf) | Q5_1 | 6.02GB |
| [codegemma-7b-it.Q6_K.gguf](https://huggingface.co/RichardErkhov/google_-_codegemma-7b-it-gguf/blob/main/codegemma-7b-it.Q6_K.gguf) | Q6_K | 6.53GB |
Original model description:
Repo model google/codegemma-7b-it is gated. You must be authenticated to access it.
|
John6666/persona-style-checkpoint-pony-xl-sdxl | John6666 | "2024-06-10T22:21:20Z" | 1,719 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"persona",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-10T22:16:50Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- persona
---
Original model is [here](https://civitai.com/models/31771?modelVersionId=563439).
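Since the repository is tagged for `StableDiffusionXLPipeline`, a minimal `diffusers` sketch would look like the following; the prompt, negative prompt, and sampling settings are placeholders rather than recommended values.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/persona-style-checkpoint-pony-xl-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Pony-derived checkpoints often expect quality tags; adjust the prompt to taste.
image = pipe(
    prompt="score_9, 1girl, persona style, portrait, dynamic lighting",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("persona_style.png")
```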
|
KoboldAI/OPT-350M-Erebus | KoboldAI | "2023-06-23T00:03:22Z" | 1,718 | 15 | transformers | [
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-11-13T11:56:06Z" | ---
language: en
license: other
commercial: no
inference: false
---
# OPT 350M - Erebus
## Model description
This is the second generation of the original Shinen made by Mr. Seeker. The full dataset consists of 6 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology, where it means "darkness". This is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The data can be divided in 6 different datasets:
- Literotica (everything with 4.5/5 or higher)
- Sexstories (everything with 90 or higher)
- Dataset-G (private dataset of X-rated stories)
- Doc's Lab (all stories)
- Pike Dataset (novels with "adult" rating)
- SoFurry (collection of various animals)
The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/OPT-350M-Erebus')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}]
```
## Limitations and biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). **Warning: This model has a very strong NSFW bias!**
### License
OPT-350M is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### BibTeX entry and citation info
```
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Gryphe/MythoBoros-13b | Gryphe | "2023-08-04T09:53:03Z" | 1,718 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-22T09:05:00Z" | ---
license: other
language:
- en
---
## Model details
MythoBoros-13b can be considered a sister model to [MythoLogic-13b](https://huggingface.co/Gryphe/MythoLogic-13b), sharing the same goals but having a different approach.
Whereas the previous model was a series of experimental gradient merges, this one is a simple straight-up 66/34 merge of [Chronos](https://huggingface.co/elinas/chronos-13b) and the freshly released [Ouroboros](https://huggingface.co/CalderaAI/13B-Ouroboros), providing a very solid foundation for a well-performing roleplaying model.
MythoBoros tends to be somewhat more formal with its responses in comparison to MythoLogic.
My advice? Try both, see which one you prefer.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoBoros-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ) (You're the best!)
## Prompt Format
This model works best with Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
``` |
cognitivecomputations/dolphin-2.2-yi-34b-200k | cognitivecomputations | "2024-06-26T02:32:08Z" | 1,718 | 35 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-07T22:07:55Z" | ---
language:
- en
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
model-index:
- name: dolphin-2.2-yi-34b-200k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 42.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 68.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/dolphin-2.2-yi-34b-200k
name: Open LLM Leaderboard
license: apache-2.0
---
Dolphin 2.2 🐬
https://erichartford.com/dolphin
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/).
This model is based on Yi, and is subject to Yi license.
The base model has 200k context, I finetuned it with 16k.
Note: No longer need trust_remote_code! Thank you Yi team!
New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice and will care about your feelings, and it has received extra training on long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of Convai.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[If you would like to financially support my efforts](https://ko-fi.com/erichartford)
[I also have some swag you can buy](https://fa7113.myshopify.com/)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.2-yi-34b-200k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |46.67|
|AI2 Reasoning Challenge (25-Shot)|42.15|
|HellaSwag (10-Shot) |68.18|
|MMLU (5-Shot) |55.47|
|TruthfulQA (0-shot) |45.93|
|Winogrande (5-shot) |64.56|
|GSM8k (5-shot) | 3.71|
|
google/rembert | google | "2022-05-27T15:05:23Z" | 1,717 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"rembert",
"multilingual",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"bs",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"hr",
"ht",
"hu",
"hy",
"id",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:wikipedia",
"arxiv:2010.12821",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
license: apache-2.0
datasets:
- wikipedia
---
# RemBERT (for classification)
RemBERT model pretrained on 110 languages using a masked language modeling (MLM) objective. It was introduced in the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821). A direct export of the model checkpoint was first made available in [this repository](https://github.com/google-research/google-research/tree/master/rembert). This version of the checkpoint is lightweight since it is meant to be finetuned for classification and excludes the output embedding weights.
## Model description
RemBERT's main difference with mBERT is that the input and output embeddings are not tied. Instead, RemBERT uses small input embeddings and larger output embeddings. This makes the model more efficient since the output embeddings are discarded during fine-tuning. It is also more accurate, especially when reinvesting the input embeddings' parameters into the core model, as is done on RemBERT.
## Intended uses & limitations
You should fine-tune this model for your downstream task. It is meant to be a general-purpose model, similar to mBERT. In our [paper](https://arxiv.org/abs/2010.12821), we have successfully applied this model to tasks such as classification, question answering, NER, POS-tagging. For tasks such as text generation you should look at models like GPT2.
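To illustrate that fine-tuning workflow, here is a minimal classification sketch with `transformers`; the label count and example text are placeholders, and a real run would wrap this step inside a full training loop or the `Trainer` API.

```python
import torch
from transformers import RemBertTokenizer, RemBertForSequenceClassification

model_id = "google/rembert"
tokenizer = RemBertTokenizer.from_pretrained(model_id)
# num_labels is a placeholder -- set it to the number of classes in your task.
model = RemBertForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("RemBERT works across many languages.", return_tensors="pt")
labels = torch.tensor([1])

# One forward/backward step; in practice this sits inside a training loop or Trainer.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
print(float(outputs.loss))
```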
## Training data
The RemBERT model was pretrained on multilingual Wikipedia data over 110 languages. The full language list is on [this repository](https://github.com/google-research/google-research/tree/master/rembert)
### BibTeX entry and citation info
```bibtex
@inproceedings{DBLP:conf/iclr/ChungFTJR21,
author = {Hyung Won Chung and
Thibault F{\'{e}}vry and
Henry Tsai and
Melvin Johnson and
Sebastian Ruder},
title = {Rethinking Embedding Coupling in Pre-trained Language Models},
booktitle = {9th International Conference on Learning Representations, {ICLR} 2021,
Virtual Event, Austria, May 3-7, 2021},
publisher = {OpenReview.net},
year = {2021},
url = {https://openreview.net/forum?id=xpFFI\_NtgpW},
timestamp = {Wed, 23 Jun 2021 17:36:39 +0200},
biburl = {https://dblp.org/rec/conf/iclr/ChungFTJR21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
internlm/internlm-xcomposer-7b | internlm | "2023-12-25T12:03:58Z" | 1,717 | 15 | transformers | [
"transformers",
"pytorch",
"InternLMXComposer",
"feature-extraction",
"text-generation",
"custom_code",
"license:apache-2.0",
"region:us"
] | text-generation | "2023-09-26T03:39:53Z" | ---
license: apache-2.0
pipeline_tag: text-generation
---
<p align="center">
<img src="logo.png" width="400"/>
<p>
<p align="center">
<b><font size="6">InternLM-XComposer</font></b>
<p>
<div align="center">
[💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
</div>
**InternLM-XComposer** is a vision-language large model (VLLM) based on [InternLM](https://github.com/InternLM/InternLM/tree/main) for advanced text-image comprehension and composition. InternLM-XComposer has several appealing properties:
- **Interleaved Text-Image Composition**: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. The interleaved text-image composition is implemented in following steps:
1. **Text Generation**: It crafts long-form text based on human-provided instructions.
2. **Image Spotting and Captioning**: It pinpoints optimal locations for image placement and furnishes image descriptions.
3. **Image Retrieval and Selection**: It selects image candidates and identifies the image that optimally complements the content.
- **Comprehension with Rich Multilingual Knowledge**: The text-image comprehension is empowered by training on extensive multi-modal multilingual concepts with carefully crafted strategies, resulting in a deep understanding of visual content.
- **Strong performance**: It consistently achieves state-of-the-art results across various benchmarks for vision-language large models, including [MME Benchmark](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) (English), [MMBench](https://opencompass.org.cn/leaderboard-multimodal) (English), [Seed-Bench](https://huggingface.co/spaces/AILab-CVC/SEED-Bench_Leaderboard) (English), [CCBench](https://opencompass.org.cn/leaderboard-multimodal) (Chinese), and [MMBench-CN](https://opencompass.org.cn/leaderboard-multimodal) (Chinese).
We release InternLM-XComposer series in two versions:
- InternLM-XComposer-VL: The pretrained VLLM model with InternLM as the initialization of the LLM, achieving strong performance on various multimodal benchmarks, e.g., MME Benchmark, MMBench Seed-Bench, CCBench, and MMBench-CN.
- InternLM-XComposer: The finetuned VLLM for *Interleaved Text-Image Composition* and *LLM-based AI assistant*.
<br> |
LoneStriker/Blue-Orchid-2x7b-GGUF | LoneStriker | "2024-02-01T19:31:24Z" | 1,717 | 5 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2024-02-01T19:00:42Z" | ---
license: apache-2.0
---
**Blue-Orchid-2x7b**
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too. |
openbmb/Eurus-RM-7b | openbmb | "2024-05-14T12:09:46Z" | 1,717 | 23 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"reward_model",
"text-classification",
"custom_code",
"dataset:openbmb/UltraFeedback",
"dataset:openbmb/UltraInteract_pair",
"dataset:openbmb/UltraSafety",
"arxiv:2404.02078",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-classification | "2024-04-02T08:36:55Z" | ---
license: apache-2.0
datasets:
- openbmb/UltraFeedback
- openbmb/UltraInteract_pair
- openbmb/UltraSafety
tags:
- reward_model
pipeline_tag: text-classification
---
# Links
- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
- [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)
# Introduction
Eurus-RM-7B is trained on a mixture of [UltraInteract](https://huggingface.co/datasets/openbmb/UltraInteract), [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), and [UltraSafety](https://huggingface.co/datasets/openbmb/UltraSafety), with a reward modeling objective specifically designed for reasoning.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
import torch
def test(model_path):
dataset = [ # cases in webgpt; we use the same template as Mistral-Instruct-v0.2
{
"chosen": "[INST] Sural relates to which part of the body? [/INST] The sural region is the muscular swelling of the back of the leg below the knee, formed chiefly by the bellies of the gastrocnemius and soleus muscles [1,2].",
"rejected": "[INST] Sural relates to which part of the body? [/INST] The Sural nerve runs down the side of the leg near the small saphenous vein, then passes forward below the lateral malleolus and continues on the outside of the foot as the lateral dorsal cutaneous nerve, which then communicates with the intermediate dorsal cutaneous nerve, which branches off to the side of the foot. [1]",
}
]
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
with torch.no_grad():
for example in dataset:
inputs = tokenizer(example["chosen"], return_tensors="pt")
chosen_reward = model(**inputs).item()
inputs = tokenizer(example["rejected"], return_tensors="pt")
rejected_reward = model(**inputs).item()
print(chosen_reward - rejected_reward)
test("openbmb/Eurus-RM-7b")
# Output: 47.4404296875
```
## Evaluation
- Eurus-RM-7B stands out as the best 7B RM overall and achieves similar or better performance than much larger baselines. Particularly, it outperforms GPT-4 in certain tasks.
- Our training objective is beneficial in improving RM performance on hard problems and reasoning.
- ULTRAINTERACT is compatible with other datasets like UltraFeedback and UltraSafety, and mixing these datasets can balance different RM abilities.
- Eurus-RM-7B improves LLMs’ reasoning performance by a large margin through reranking.
<img src="./figures/rm_exp.png" alt="stats" style="zoom: 40%;" />
## Citation
```
@misc{yuan2024advancing,
title={Advancing LLM Reasoning Generalists with Preference Trees},
author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
year={2024},
eprint={2404.02078},
archivePrefix={arXiv},
}
``` |
MaziyarPanahi/TopEvolution16-GGUF | MaziyarPanahi | "2024-06-15T16:10:04Z" | 1,717 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:mergekit-community/TopEvolution",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/TopEvolution16"
] | text-generation | "2024-06-15T15:48:56Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:NousResearch/Hermes-2-Theta-Llama-3-8B
- base_model:mergekit-community/TopEvolution
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: TopEvolution16-GGUF
base_model: mergekit-community/TopEvolution16
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/TopEvolution16-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution16-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/TopEvolution16](https://huggingface.co/mergekit-community/TopEvolution16)
## Description
[MaziyarPanahi/TopEvolution16-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution16-GGUF) contains GGUF format model files for [mergekit-community/TopEvolution16](https://huggingface.co/mergekit-community/TopEvolution16).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
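For a concrete starting point with one of the clients above, here is a minimal llama-cpp-python sketch. It is not part of the original card: the filename is an assumed example (use whichever quant you actually downloaded from this repository) and the settings are illustrative only.

```python
# Minimal sketch (assumed filename and settings, not from the original card).
from llama_cpp import Llama

llm = Llama(
    model_path="./TopEvolution16.Q4_K_M.gguf",  # assumed local quant filename
    n_ctx=4096,                                 # context window, adjust as needed
)

out = llm("Explain in one sentence what a model merge is.", max_tokens=128)
print(out["choices"][0]["text"])
```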
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
MaziyarPanahi/mergekit-slerp-npojgos-GGUF | MaziyarPanahi | "2024-06-16T10:00:24Z" | 1,717 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-npojgos"
] | text-generation | "2024-06-16T09:38:16Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Hermes-2-Pro-Mistral-7B
- base_model:WizardLM/WizardMath-7B-V1.1
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-npojgos-GGUF
base_model: mergekit-community/mergekit-slerp-npojgos
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-npojgos-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-npojgos-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-npojgos](https://huggingface.co/mergekit-community/mergekit-slerp-npojgos)
## Description
[MaziyarPanahi/mergekit-slerp-npojgos-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-npojgos-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-npojgos](https://huggingface.co/mergekit-community/mergekit-slerp-npojgos).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf | RichardErkhov | "2024-06-27T12:14:34Z" | 1,717 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-27T12:07:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1.5-Wukong-0.5B - GGUF
- Model creator: https://huggingface.co/RESMPDEV/
- Original model: https://huggingface.co/RESMPDEV/Qwen1.5-Wukong-0.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen1.5-Wukong-0.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q2_K.gguf) | Q2_K | 0.23GB |
| [Qwen1.5-Wukong-0.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [Qwen1.5-Wukong-0.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [Qwen1.5-Wukong-0.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [Qwen1.5-Wukong-0.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.IQ3_M.gguf) | IQ3_M | 0.26GB |
| [Qwen1.5-Wukong-0.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q3_K.gguf) | Q3_K | 0.26GB |
| [Qwen1.5-Wukong-0.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [Qwen1.5-Wukong-0.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.28GB |
| [Qwen1.5-Wukong-0.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.28GB |
| [Qwen1.5-Wukong-0.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q4_0.gguf) | Q4_0 | 0.29GB |
| [Qwen1.5-Wukong-0.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.IQ4_NL.gguf) | IQ4_NL | 0.29GB |
| [Qwen1.5-Wukong-0.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.29GB |
| [Qwen1.5-Wukong-0.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q4_K.gguf) | Q4_K | 0.3GB |
| [Qwen1.5-Wukong-0.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.3GB |
| [Qwen1.5-Wukong-0.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q4_1.gguf) | Q4_1 | 0.3GB |
| [Qwen1.5-Wukong-0.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q5_0.gguf) | Q5_0 | 0.32GB |
| [Qwen1.5-Wukong-0.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.32GB |
| [Qwen1.5-Wukong-0.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q5_K.gguf) | Q5_K | 0.33GB |
| [Qwen1.5-Wukong-0.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.33GB |
| [Qwen1.5-Wukong-0.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q5_1.gguf) | Q5_1 | 0.34GB |
| [Qwen1.5-Wukong-0.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q6_K.gguf) | Q6_K | 0.36GB |
| [Qwen1.5-Wukong-0.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/RESMPDEV_-_Qwen1.5-Wukong-0.5B-gguf/blob/main/Qwen1.5-Wukong-0.5B.Q8_0.gguf) | Q8_0 | 0.47GB |
Original model description:
---
language:
- en
- zh
license: other
datasets:
- teknium/OpenHermes-2.5
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: Qwen1.5-Wukong-0.5B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 31.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 47.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.44
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RESMPDEV/Qwen1.5-Wukong-0.5B
name: Open LLM Leaderboard
---

# Qwen1.5-Wukong-0.5B
Join Our Discord! https://discord.gg/cognitivecomputations
Qwen1.5-Wukong-0.5B is a dealigned chat finetune of the original fantastic Qwen1.5-0.5B model by the Qwen team.
This model was trained on teknium's OpenHermes-2.5 dataset and some supplementary datasets from Cognitive Computations https://erichartford.com/dolphin 🐬
This model was trained for 3 epochs on three RTX 3090s.
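Below is a minimal, hedged loading sketch (not part of the original card). It assumes `transformers>=4.37.0` for the `qwen2` architecture and that the bundled tokenizer's chat template matches this finetune's training format; verify both before relying on it.

```python
# Hedged sketch: load the chat finetune with transformers (>=4.37.0 required for qwen2).
# The chat template and sampling settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RESMPDEV/Qwen1.5-Wukong-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a dealigned finetune is in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```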
# Example Outputs
TBD
# Original Model Card Below
# Qwen1.5-0.5B
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
* Significant performance improvement in Chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For this beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RESMPDEV__Qwen1.5-Wukong-0.5B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |38.15|
|AI2 Reasoning Challenge (25-Shot)|31.74|
|HellaSwag (10-Shot) |47.78|
|MMLU (5-Shot) |38.44|
|TruthfulQA (0-shot) |38.92|
|Winogrande (5-shot) |56.51|
|GSM8k (5-shot) |15.54|
|
ausboss/llama-30b-supercot | ausboss | "2023-05-23T20:57:23Z" | 1,716 | 126 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-21T16:03:52Z" | Merge of [huggyllama/llama-30b](https://huggingface.co/huggyllama/llama-30b) + [kaiokendev/SuperCOT-LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA/edit/main/README.md)
Supercot was trained to work with langchain prompting.
Load it locally with my custom LLM notebook, which uses the Oobabooga modules to load models: https://github.com/ausboss/Local-LLM-Langchain
Then you can add cells from any of these other notebooks for testing: https://github.com/gkamradt/langchain-tutorials
# From the kaiokendev LoRA page
### Compatibility
This LoRA is compatible with any 7B, 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
### Prompting
You should prompt the LoRA the same way you would prompt Alpaca or Alpacino:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
<instruction>
### Input:
<any additional context. Remove this if it's not neccesary>
### Response:
<make sure to leave a single new-line here for optimal results>
```
Remember that with lower parameter sizes, the structure of the prompt becomes more important. The same prompt worded differently can give wildly different answers. Consider using the following suggestion suffixes to improve output quality:
- "Think through this step by step"
- "Let's think about this logically"
- "Explain your reasoning"
- "Provide details to support your answer"
- "Compare and contrast your answer with alternatives"
### Coming Soon
- Tweet fix for 13B and 7B - lower model sizes seem to be extremely sensitive to hashtags at the end of training data responses, especially at longer cutoffs |
RWKV/rwkv-4-14b-pile | RWKV | "2023-05-15T10:06:18Z" | 1,716 | 3 | transformers | [
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-05T11:51:43Z" | ---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 14B parameters trained on Pile dataset
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found in the blog posts mentioned above and in the Hugging Face blog post about the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename, and the output directory. You can also optionally push the converted model to the Hub directly by passing the `--push_to_hub` flag and the `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-14b-pile", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-14b-pile")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/) |
ericzzz/falcon-rw-1b-chat | ericzzz | "2024-03-05T00:48:21Z" | 1,716 | 3 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-12-05T02:03:55Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
datasets:
- HuggingFaceH4/ultrachat_200k
- openchat/openchat_sharegpt4_dataset
- Open-Orca/SlimOrca
inference: false
model-index:
- name: falcon-rw-1b-chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 35.58
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.12
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.51
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.62
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.72
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.67
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ericzzz/falcon-rw-1b-chat
name: Open LLM Leaderboard
pipeline_tag: text-generation
---
# 🌟 Falcon-RW-1B-Chat
**Falcon-RW-1B-Chat is a conversational model with 1 billion parameters.** It's a further refinement of the [Falcon-RW-1B-Instruct-OpenOrca](https://huggingface.co/ericzzz/falcon-rw-1b-instruct-openorca), trained on selected data from the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) and [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) datasets.
**✨Try it out at our [Tiny Chat](https://huggingface.co/spaces/ericzzz/tiny-chat) space running on free-tier hardware!✨**
The underlying Falcon-RW-1B-Instruct-OpenOrca model is built on the [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b), a causal decoder-only model. It has been instruction-finetuned using the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
**🎯 Purpose**
The Falcon-RW-1B-Chat aims to add conversational capabilities to the Falcon-RW-1B-Instruct-OpenOrca model. This initiative is driven by the need for a smaller, open-source, instruction-finetuned, ready-to-use model, suitable for users with limited computational resources, like lower-end consumer GPUs.
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ericzzz__falcon-rw-1b-chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.37|
|AI2 Reasoning Challenge (25-Shot)|35.58|
|HellaSwag (10-Shot) |61.12|
|MMLU (5-Shot) |24.51|
|TruthfulQA (0-shot) |39.62|
|Winogrande (5-shot) |61.72|
|GSM8k (5-shot) | 1.67|
## 📖 Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "ericzzz/falcon-rw-1b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype=torch.bfloat16
)
chat_history = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I assist you today?"},
{"role": "user", "content": "Explain what AI is."},
]
input_ids = tokenizer.apply_chat_template(
chat_history, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.7,
repetition_penalty=1.05,
max_new_tokens=200,
)
output_text = tokenizer.decode(
output_tokens[0][len(input_ids[0]) :], skip_special_tokens=True
)
print(output_text)
```
## ⚠️ Limitations
This model may generate inaccurate or misleading information and is prone to hallucination, creating plausible but false narratives. It lacks the ability to discern factual content from fiction and may inadvertently produce biased, harmful or offensive content. Its understanding of complex, nuanced queries is limited. Users should be aware of this and verify any information obtained from the model.
The model is provided 'as is' without any warranties, and the creators are not liable for any damages arising from its use. Users are responsible for their interactions with the model.
## 📬 Contact
For further inquiries or feedback, please contact at [email protected].
|
RichardErkhov/mistral-community_-_Mixtral-8x22B-v0.1-gguf | RichardErkhov | "2024-04-12T20:54:49Z" | 1,716 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-12T10:20:55Z" | Entry not found |
MaziyarPanahi/mergekit-slerp-ebgdloh-GGUF | MaziyarPanahi | "2024-06-16T03:42:39Z" | 1,716 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-ebgdloh"
] | text-generation | "2024-06-16T03:20:44Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:fblgit/UNA-TheBeagle-7b-v1
- base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-ebgdloh-GGUF
base_model: mergekit-community/mergekit-slerp-ebgdloh
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-ebgdloh-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ebgdloh-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-ebgdloh](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh)
## Description
[MaziyarPanahi/mergekit-slerp-ebgdloh-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ebgdloh-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-ebgdloh](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF | mradermacher | "2024-05-06T06:00:27Z" | 1,715 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cloudyu/Mixtral_Erotic_13Bx2_MOE_22B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-24T15:17:29Z" | ---
base_model: cloudyu/Mixtral_Erotic_13Bx2_MOE_22B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/cloudyu/Mixtral_Erotic_13Bx2_MOE_22B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
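As a concrete example (not part of the original card), a single file from the table below can be fetched with the Hugging Face CLI; swap in whichever quant you want:

```bash
# Assumed workflow: download one quant from this repository.
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF \
  Mixtral_Erotic_13Bx2_MOE_22B.Q4_K_M.gguf --local-dir .
```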
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q2_K.gguf) | Q2_K | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q3_K_S.gguf) | Q3_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.IQ3_M.gguf) | IQ3_M | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q3_K_M.gguf) | Q3_K_M | 10.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q3_K_L.gguf) | Q3_K_L | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.IQ4_XS.gguf) | IQ4_XS | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q4_0.gguf) | Q4_0 | 12.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.IQ4_NL.gguf) | IQ4_NL | 12.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q4_K_S.gguf) | Q4_K_S | 12.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q4_K_M.gguf) | Q4_K_M | 13.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q5_K_S.gguf) | Q5_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q5_K_M.gguf) | Q5_K_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q6_K.gguf) | Q6_K | 17.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_Erotic_13Bx2_MOE_22B-GGUF/resolve/main/Mixtral_Erotic_13Bx2_MOE_22B.Q8_0.gguf) | Q8_0 | 23.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/TopEvolution-DPO-32K-GGUF | MaziyarPanahi | "2024-06-15T14:22:42Z" | 1,715 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"base_model:mergekit-community/TopEvolution",
"base_model:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/TopEvolution-DPO-32K"
] | text-generation | "2024-06-15T14:01:25Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:mergekit-community/TopEvolution
- base_model:mpasila/Kunoichi-DPO-v2-Instruct-32k-7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: TopEvolution-DPO-32K-GGUF
base_model: mergekit-community/TopEvolution-DPO-32K
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/TopEvolution-DPO-32K-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution-DPO-32K-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/TopEvolution-DPO-32K](https://huggingface.co/mergekit-community/TopEvolution-DPO-32K)
## Description
[MaziyarPanahi/TopEvolution-DPO-32K-GGUF](https://huggingface.co/MaziyarPanahi/TopEvolution-DPO-32K-GGUF) contains GGUF format model files for [mergekit-community/TopEvolution-DPO-32K](https://huggingface.co/mergekit-community/TopEvolution-DPO-32K).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
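As a small, hedged illustration of the first client above, a downloaded quant can be run directly with the llama.cpp CLI; the filename is an assumed example and, depending on your llama.cpp version, the binary may be `./main` instead of `./llama-cli`.

```bash
# Assumed example: run one of this repo's quants with the llama.cpp CLI.
./llama-cli -m ./TopEvolution-DPO-32K.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```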
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
textattack/bert-base-uncased-snli | textattack | "2021-05-20T07:48:06Z" | 1,714 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | Entry not found |
legraphista/aya-23-8B-IMat-GGUF | legraphista | "2024-05-26T00:17:38Z" | 1,714 | 0 | gguf | [
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-8B",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | "2024-05-25T20:21:19Z" | ---
base_model: CohereForAI/aya-23-8B
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
---
# aya-23-8B-IMat-GGUF
_Llama.cpp imatrix quantization of CohereForAI/aya-23-8B_
Original Model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
Original dtype: `FP16` (`float16`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.Q8_0.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q8_0.gguf) | Q8_0 | 8.54GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q6_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q6_K.gguf) | Q6_K | 6.60GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K.gguf) | Q4_K | 5.06GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K.gguf) | Q3_K | 4.22GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K.gguf) | Q2_K | 3.44GB | ✅ Available | 🟢 Yes | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-8B.FP16.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.FP16.gguf) | F16 | 16.07GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K.gguf) | Q5_K | 5.80GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q5_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q5_K_S.gguf) | Q5_K_S | 5.67GB | ✅ Available | ⚪ No | 📦 No
| [aya-23-8B.Q4_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q4_K_S.gguf) | Q4_K_S | 4.83GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_L.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_L.gguf) | Q3_K_L | 4.53GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q3_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q3_K_S.gguf) | Q3_K_S | 3.87GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.Q2_K_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.Q2_K_S.gguf) | Q2_K_S | 3.25GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_NL.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_NL.gguf) | IQ4_NL | 4.81GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ4_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ4_XS.gguf) | IQ4_XS | 4.60GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_M.gguf) | IQ3_M | 3.99GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_S.gguf) | IQ3_S | 3.89GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XS.gguf) | IQ3_XS | 3.72GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ3_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ3_XXS.gguf) | IQ3_XXS | 3.41GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_M.gguf) | IQ2_M | 3.08GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_S.gguf) | IQ2_S | 2.90GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XS.gguf) | IQ2_XS | 2.80GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ2_XXS.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ2_XXS.gguf) | IQ2_XXS | 2.59GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_M.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_M.gguf) | IQ1_M | 2.35GB | ✅ Available | 🟢 Yes | 📦 No
| [aya-23-8B.IQ1_S.gguf](https://huggingface.co/legraphista/aya-23-8B-IMat-GGUF/blob/main/aya-23-8B.IQ1_S.gguf) | IQ1_S | 2.21GB | ✅ Available | 🟢 Yes | 📦 No
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/aya-23-8B-IMat-GGUF --include "aya-23-8B.Q8_0/*" --local-dir aya-23-8B.Q8_0
# see FAQ for merging GGUF's
```
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `aya-23-8B.Q8_0`)
3. Run `gguf-split --merge aya-23-8B.Q8_0/aya-23-8B.Q8_0-00001-of-XXXXX.gguf aya-23-8B.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
AJosh/medical-mistral3 | AJosh | "2024-06-28T12:21:15Z" | 1,714 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-06-25T02:47:56Z" | ---
license: apache-2.0
---
|
h2oai/h2ogpt-oasst1-512-20b | h2oai | "2023-04-26T16:58:30Z" | 1,713 | 38 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/openassistant_oasst1",
"dataset:h2oai/openassistant_oasst1_h2ogpt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-18T01:36:16Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/openassistant_oasst1
- h2oai/openassistant_oasst1_h2ogpt
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-oasst1-512-20b` is a 20 billion parameter instruction-following large language model licensed for commercial use.
- Base model: [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b)
- Fine-tuning dataset: [h2oai/openassistant_oasst1](https://huggingface.co/datasets/h2oai/openassistant_oasst1) and [h2oai/openassistant_oasst1_h2ogpt](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b/blob/main/gpt-neox-20b.openassistant_oasst1.json.6.0_epochs.5a14ea8b3794c0d60476fc262d0a297f98dd712d.1013.zip) and [zip](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b/blob/main/h2ogpt-oasst1-512-20b.h2oaiopenassistant_oasst1_h2ogpt.2_epochs.fcaae7ef70600de8c97c9b38cb3f0075467cdad1.3.zip)
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(model="h2oai/h2ogpt-oasst1-512-20b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True` you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-oasst1-512-20b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-oasst1-512-20b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50432, 6144)
(layers): ModuleList(
(0-43): 44 x GPTNeoXLayer(
(input_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=6144, out_features=18432, bias=True)
(dense): Linear(in_features=6144, out_features=6144, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=6144, out_features=24576, bias=True)
(dense_4h_to_h): Linear(in_features=24576, out_features=6144, bias=True)
(act): FastGELUActivation()
)
)
)
(final_layer_norm): LayerNorm((6144,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=6144, out_features=50432, bias=False)
)
```
## Model Configuration
```json
GPTNeoXConfig {
"_name_or_path": "h2oai/h2ogpt-oasst1-512-20b",
"architectures": [
"GPTNeoXForCausalLM"
],
"attention_probs_dropout_prob": 0,
"bos_token_id": 0,
"custom_pipeline": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 0,
"hidden_act": "gelu_fast",
"hidden_dropout_prob": 0,
"hidden_size": 6144,
"initializer_range": 0.02,
"intermediate_size": 24576,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 2048,
"model_type": "gpt_neox",
"num_attention_heads": 64,
"num_hidden_layers": 44,
"rotary_emb_base": 10000,
"rotary_pct": 0.25,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.28.1",
"use_cache": true,
"use_parallel_residual": true,
"vocab_size": 50432
}
```
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
[eval source code](https://github.com/h2oai/h2ogpt/issues/35#issuecomment-1521119301)
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|hellaswag | 0|acc |0.5419|± |0.0050|
| | |acc_norm|0.7259|± |0.0045|
|boolq | 1|acc |0.7125|± |0.0079|
|piqa | 0|acc |0.7742|± |0.0098|
| | |acc_norm|0.7775|± |0.0097|
|openbookqa | 0|acc |0.2800|± |0.0201|
| | |acc_norm|0.4000|± |0.0219|
|arc_challenge| 0|acc |0.3993|± |0.0143|
| | |acc_norm|0.4420|± |0.0145|
|winogrande | 0|acc |0.6614|± |0.0133|
|arc_easy | 0|acc |0.7327|± |0.0091|
| | |acc_norm|0.6894|± |0.0095|
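The numbers above come from the linked eval source; a command of the general shape used by recent lm-evaluation-harness releases is sketched below. Flags and harness version may differ from the exact run, so treat it as an approximation rather than the authoritative recipe.

```bash
# Approximate sketch; see the linked eval source for the exact command and version.
lm_eval --model hf \
  --model_args pretrained=h2oai/h2ogpt-oasst1-512-20b \
  --tasks hellaswag,boolq,piqa,openbookqa,arc_challenge,winogrande,arc_easy \
  --num_fewshot 0 --batch_size 8 --device cuda:0
```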
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
WangZeJun/bloom-820m-chat | WangZeJun | "2023-06-15T03:45:15Z" | 1,713 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-02T03:07:49Z" | ---
license: bigscience-bloom-rail-1.0
---
https://github.com/zejunwang1/bloom_tuning
You can generate dialogue with the bloom-820m-chat model using the following code:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
model_name_or_path = "WangZeJun/bloom-820m-chat"
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path).cuda()
model = model.eval()
input_pattern = "{}</s>"
text = "你好"
input_ids = tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()
outputs = model.generate(input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
temperature=0.3, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
input_ids_len = input_ids.size(1)
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids)
print(response)
```
|
RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf | RichardErkhov | "2024-06-02T10:10:19Z" | 1,713 | 1 | null | [
"gguf",
"arxiv:2404.07965",
"region:us"
] | null | "2024-06-02T08:26:03Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
rho-math-1b-interpreter-v0.1 - GGUF
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [rho-math-1b-interpreter-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [rho-math-1b-interpreter-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [rho-math-1b-interpreter-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [rho-math-1b-interpreter-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [rho-math-1b-interpreter-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [rho-math-1b-interpreter-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [rho-math-1b-interpreter-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [rho-math-1b-interpreter-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [rho-math-1b-interpreter-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [rho-math-1b-interpreter-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [rho-math-1b-interpreter-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [rho-math-1b-interpreter-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [rho-math-1b-interpreter-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [rho-math-1b-interpreter-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [rho-math-1b-interpreter-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [rho-math-1b-interpreter-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [rho-math-1b-interpreter-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [rho-math-1b-interpreter-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [rho-math-1b-interpreter-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [rho-math-1b-interpreter-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [rho-math-1b-interpreter-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [rho-math-1b-interpreter-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/microsoft_-_rho-math-1b-interpreter-v0.1-gguf/blob/main/rho-math-1b-interpreter-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
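As a rough sketch of running one of these quants locally with `llama-cpp-python` (the file name, context size, prompt, and sampling settings below are illustrative assumptions, not values prescribed by this repository):
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point model_path at a quant downloaded from the table above (hypothetical local filename).
llm = Llama(model_path="rho-math-1b-interpreter-v0.1.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Question: What is 17 * 24?\nAnswer:",
    max_tokens=128,
    temperature=0.0,
)
print(out["choices"][0]["text"])
```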
Original model description:
---
license: mit
tags:
- nlp
- math
language:
- en
pipeline_tag: text-generation
---
<h1 align="center">
Rho-1: Not All Tokens Are What You Need
</h1>
<p align="center">
<a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
<a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
<a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
<a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/acc_vs_tokens_1b_7b.png?raw=true" width="1000">
<br>
<em>Figure 1: Rho-1 is pre-trained with Selective Language Modeling (SLM). SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, achieving the baseline performance 5-10x faster.</em>
</p>
## 🔥 News
- [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
- [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on MATH dataset, respectively — matching DeepSeekMath with only 3\% of the pretraining tokens.
- [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.
- [Rho-Math-7B-Interpreter](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) achieves 52% on MATH dataset, using only 69k samples for fine-tuning.
- [2024/04/11] Rho-1 paper and repo released.
## 💡 Introduction
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that are aligned with the desired distribution.
### Selective Language Modeling (SLM)
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/example.png?raw=true" width="1000">
<br>
<em>Figure 2:
<b>Upper:</b> Even an extensively filtered pretraining corpus contains token-level noise.
<b>Left:</b> Previous Causal Language Modeling (CLM) trains on all tokens.
<b>Right:</b> Our proposed Selective Language Modeling (SLM) selectively applies loss on those useful and clean tokens.</em>
</p>
<p align="center">
<img src="https://github.com/microsoft/rho/blob/main/docs/static/images/pipeline.png?raw=true" width="1000">
<br>
<em>Figure 3: <b>The pipeline of Selective Language Modeling.</b>
SLM optimizes language model performance by concentrating on valuable, clean tokens during pre-training.
It involves three steps:
(Step 1) Initially, train a reference model on high-quality data.
(Step 2) Then, score each token's loss in a corpus using the reference model.
(Step 3) Finally, train the language model selectively on tokens that show higher excess loss compared to the reference loss.</em>
</p>
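To make the selection step concrete, here is a minimal sketch of excess-loss token filtering in PyTorch. It only illustrates the idea described above and is not the authors' implementation; the keep-ratio thresholding is an assumption.
```python
import torch
import torch.nn.functional as F

def selective_lm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Average the LM loss only over tokens with the highest excess loss vs. a reference model.

    logits, ref_logits: (batch, seq, vocab) outputs of the training and reference models.
    labels: (batch, seq) next-token targets (already shifted).
    """
    # Per-token cross-entropy for both models (no reduction).
    loss = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_loss = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")

    # Excess loss: how much harder the token is for the current model than for the reference.
    excess = loss - ref_loss

    # Keep the top fraction of tokens by excess loss and mask out the rest.
    k = max(1, int(keep_ratio * excess.numel()))
    threshold = excess.flatten().topk(k).values.min()
    mask = (excess >= threshold).float()

    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```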
<!-- results: -->
### Evaluation Results
Base models (Few-shot CoT):
| **Model** | **Size** | **Data** | **Uniq. Token** | **Train Token** | **GSM8K** | **MATH** | **MMLU STEM** | **SAT** |
|:-----------------:|:--------:|:--------:|:---------------:|:---------------:|:---------:|:--------:|:-------------:|:--------:|
| 1-2B Base Models | | | | | | | | |
| Qwen1.5 | 1.8B | - | - | - | 36.1 | 6.8 | 31.3 | 40.6 |
| Gemma | 2.0B | - | - | - | 18.8 | 11.4 | **34.4** | 50.0 |
| DeepSeekMath | 1.3B | - | 120B | 150B | 23.8 | 13.6 | 33.1 | **56.3** |
| [Rho-Math-1B-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1) | 1.1B | OWM | 14B | 30B | **36.2** | **15.6** | 23.3 | 28.1 |
| >= 7B Base Models | | | | | | | | |
| Mistral | 7B | | - | - | 41.2 | 11.6 | 49.5 | 59.4 |
| Minerva | 540B | - | 39B | 26B | 58.8 | 33.6 | **63.9** | - |
| LLemma | 34B | PPile | 55B | 50B | 54.2 | 23.0 | 54.7 | 68.8 |
| InternLM2-Math | 20B | - | 31B | 125B | 65.4 | 30.0 | 53.1 | 71.9 |
| DeepSeekMath | 7B | - | 120B | 500B | 64.1 | **34.2** | 56.4 | **84.4** |
| [Rho-Math-7B-v0.1](https://huggingface.co/microsoft/rho-math-7b-v0.1) | 7B | OWM | 14B | 10.5B | **66.9** | 31.0 | 54.6 | **84.4** |
[Tool-integrated reasoning](https://github.com/microsoft/ToRA) (Code Interpreter):
| **Model** | **Size** | **SFT Data** | **GSM8k** | **MATH** | **SVAMP** | **ASDiv** | **MAWPS** | **TabMWP** | **GSM-Hard** | **AVG** |
|------------------------------|----------|--------------|-----------|----------|-----------|-----------|-----------|------------|--------------|----------|
| gpt4-early (pal) | - | - | 94.2 | 51.8 | 94.8 | 92.6 | 97.7 | 95.9 | 77.6 | 86.4 |
| gpt-4-turbo-2024-04-09 (cot) | - | - | - | 73.4 | - | - | - | - | - | - |
| Open-Source Small Models | | | | | | | | | | |
| MAmmoTH | 70B | MI-260k | 76.9 | 41.8 | 82.4 | - | - | - | - | - |
| ToRA | 7B | ToRA-69k | 68.8 | 40.1 | 68.2 | 73.9 | 88.8 | 42.4 | 54.6 | 62.4 |
| ToRA | 70B | ToRA-69k | 84.3 | 49.7 | **82.7** | 86.8 | 93.8 | 74.0 | **67.2** | **76.9** |
| DeepSeekMath | 7B | ToRA-69k | 79.8 | **52.0** | 80.1 | **87.1** | 93.8 | **85.8** | 63.1 | 77.4 |
| [Rho-Math-1B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) | 1B | ToRA-69k | 59.4 | 40.6 | 60.7 | 74.2 | 88.6 | 26.7 | 48.1 | 56.9 |
| [Rho-Math-7B-Interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) | 7B | ToRA-69k | 81.3 | **51.8** | 80.8 | 85.5 | **94.5** | 70.1 | 63.1 | 75.3 |
## 🚀 Quick Start
### Evaluation
```sh
git clone [email protected]:microsoft/rho.git
cd rho-1/math-evaluation-harness
```
Base model few-shot evaluation:
```sh
bash scripts/run_eval.sh cot microsoft/rho-math-7b-v0.1
```
SFT model (code-interpreter) evaluation:
```sh
bash scripts/run_eval.sh tora microsoft/rho-math-7b-interpreter-v0.1
```
Our reproduced outputs are provided in `rho-1/outputs.zip`.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{lin2024rho1,
title={Rho-1: Not All Tokens Are What You Need},
author={Zhenghao Lin and Zhibin Gou and Yeyun Gong and Xiao Liu and Yelong Shen and Ruochen Xu and Chen Lin and Yujiu Yang and Jian Jiao and Nan Duan and Weizhu Chen},
year={2024},
eprint={2404.07965},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
arve3210/hfmodel | arve3210 | "2024-06-27T03:13:57Z" | 1,713 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-06-26T23:13:41Z" | Entry not found |
microsoft/swin-large-patch4-window7-224 | microsoft | "2022-05-16T19:58:33Z" | 1,712 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (large-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Writer/palmyra-base | Writer | "2023-08-28T17:50:19Z" | 1,712 | 42 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text generation",
"causal-lm",
"Writer-data",
"gpt",
"NeMo",
"palmyra",
"en",
"dataset:English",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-02-03T13:13:20Z" | ---
language:
- en
datasets:
- English
tags:
- text generation
- pytorch
- causal-lm
- Writer-data
- gpt
- NeMo
- palmyra
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---
# Palmyra Base 5B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
Palmyra Base was primarily pre-trained on English text; note that a trace amount of non-English data from CommonCrawl is still present in the training corpus. Like GPT-3, Palmyra Base is a decoder-only model pre-trained with a self-supervised causal language modeling (CLM) objective. Its evaluation follows the prompts and general experimental setup of GPT-3.
### Use case
Palmyra Base is extremely powerful while being extremely fast. This model excels at many nuanced tasks such as sentiment classification and summarization.
## Training data
Palmyra Base (5b) was trained on Writer’s custom dataset.
## Intended Use and Limitations
Palmyra Base learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pre-trained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-base", torch_dtype=torch.float16).cuda()
# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-base", use_fast=False)
```
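Continuing from the snippet above, a short generation sketch follows; the prompt and sampling settings are arbitrary placeholders, not recommended values:
```python
prompt = "Write a short product description for a reusable water bottle."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a continuation from the model loaded above.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```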
### Limitations and Biases
Palmyra Base’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra Base, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra Base to produce factually correct results.
Palmyra Base was trained on Writer’s custom data. As with all language models, it is difficult to predict how Palmyra Base will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
Evaluation of Palmyra-base model on the SuperGLUE benchmark
| Task | Metric | Value |
|------------|--------|-------|
| boolq | acc | 64.43 |
| cb | acc | 10.71 |
| | f1 | 08.32 |
| copa | acc | 76.00 |
| multirc | acc | 01.26 |
| record | f1 | 84.02 |
| | em | 83.29 |
| wic | acc | 50.00 |
| wsc | acc | 36.54 |
## Citation and Related Information
To cite this model:
```
@misc{Palmyra,
author = {Writer Engineering team},
title = {{Palmyra-base Parameter Autoregressive Language Model}},
howpublished = {\url{https://dev.writer.com}},
year = 2023,
month = January
}
``` |
vngrs-ai/VBART-Large-Paraphrasing | vngrs-ai | "2024-03-23T08:35:35Z" | 1,712 | 0 | transformers | [
"transformers",
"tf",
"safetensors",
"mbart",
"text2text-generation",
"tr",
"dataset:vngrs-ai/vngrs-web-corpus",
"arxiv:2403.01308",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-06T18:36:27Z" | ---
language:
- tr
inference:
parameters:
max_new_tokens: 128
arXiv: 2403.01308
library_name: transformers
pipeline_tag: text2text-generation
license: cc-by-nc-sa-4.0
datasets:
- vngrs-ai/vngrs-web-corpus
---
# VBART Model Card
## Model Description
VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from scratch on a large scale. It was pre-trained by VNGRS in February 2023.
The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned.
It outperforms its multilingual counterparts, albeit being much smaller than other implementations.
This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for the sentence-level text paraphrasing task.
- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:** Transformer encoder-decoder based on mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Finetuned from:** VBART-Large
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-Large-Paraphrasing",
model_input_names=['input_ids', 'attention_mask'])
# Uncomment the device_map kwarg and delete the closing bracket to use model for inference on GPU
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-Large-Paraphrasing")#, device_map="auto")
input_text="..."
token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
outputs = model.generate(**token_input)
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus). It is curated by cleaning and filtering Turkish parts of [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web crawl data. More information about the dataset can be found on their respective pages. Data is filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).
The fine-tuning dataset is a mixture of [OpenSubtitles](https://huggingface.co/datasets/open_subtitles), [TED Talks (2013)](https://wit3.fbk.eu/home) and [Tatoeba](https://tatoeba.org/en/) datasets.
### Limitations
This model is fine-tuned for paraphrasing at the sentence level only. It is not intended to be used for any other task and cannot be fine-tuned to another task with the full performance of the base model. It is also not guaranteed that this model will work without the specified prompts.
### Training Procedure
Pre-trained for 30 days on a total of 708B tokens. Fine-tuned for 20 epochs.
#### Hardware
- **GPUs**: 8 x Nvidia A100-80 GB
#### Software
- TensorFlow
#### Hyperparameters
##### Pretraining
- **Training regime:** fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (using mask lengths sampled from Poisson distribution λ=3.5, masking 30% of tokens)
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformers paper (20,000 warm-up steps); see the sketch after this list
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial Learning rate**: 5e-6
- **Training tokens**: 708B
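For reference, the schedule from the original Transformer paper mentioned above is commonly implemented as below. This is a generic sketch: the `d_model` value and any extra scaling VNGRS applied on top of it are assumptions, not numbers taken from this card.
```python
def transformer_lr(step: int, d_model: int = 1024, warmup_steps: int = 20_000) -> float:
    """Inverse-square-root learning-rate schedule from the original Transformer paper."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```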
##### Fine-tuning
- **Training regime:** fp16 mixed precision
- **Optimizer** : Adam optimizer (β1 = 0.9, β2 = 0.98, Ɛ = 1e-6)
- **Scheduler**: Linear decay scheduler
- **Dropout**: 0.1
- **Learning rate**: 1e-5
- **Fine-tune epochs**: 20
#### Metrics

## Citation
```
@article{turker2024vbart,
title={VBART: The Turkish LLM},
author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
journal={arXiv preprint arXiv:2403.01308},
year={2024}
}
``` |
timm/vit_base_patch32_clip_384.openai_ft_in12k_in1k | timm | "2023-05-06T00:04:30Z" | 1,711 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:wit-400m",
"dataset:imagenet-12k",
"arxiv:2212.07143",
"arxiv:2103.00020",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-11-11T08:13:25Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- wit-400m
- imagenet-12k
---
# Model card for vit_base_patch32_clip_384.openai_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on WIT-400M image-text pairs by OpenAI using CLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.3
- GMACs: 12.7
- Activations (M): 12.1
- Image size: 384 x 384
- **Papers:**
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- WIT-400M
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch32_clip_384.openai_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch32_clip_384.openai_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
csitfun/llama-7b-logicot | csitfun | "2023-06-08T07:31:39Z" | 1,710 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"logical",
"en",
"dataset:csitfun/LogiCoT",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-04T13:15:31Z" | ---
license: cc-by-sa-4.0
datasets:
- csitfun/LogiCoT
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- logical
---
This model is tuned on the **LogiCoT** data and the GPT-4 alpaca data with the **LLaMa-7b** model.
We used 2 A100 GPUs.
We first instruction-tuned LLaMa-7b on the GPT-4 alpaca data for 3 days, then on the LogiCoT data for 4 days. |
John6666/ebara-mfcg-pony-mix-v12-sdxl | John6666 | "2024-06-17T22:26:37Z" | 1,710 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-17T22:21:37Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/466637/ebaramfcgponymix?modelVersionId=579053).
|
Echelon-AI/medbotlm-v0.2 | Echelon-AI | "2024-06-20T15:11:51Z" | 1,710 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:ruslanmv/Medical-Llama3-8B",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-19T17:47:39Z" | ---
base_model:
- ruslanmv/Medical-Llama3-8B
- HPAI-BSC/Llama3-Aloe-8B-Alpha
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# llama3-medbotlm-v0.3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [HPAI-BSC/Llama3-Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) as a base.
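As a rough illustration of what task arithmetic does (this is not the mergekit implementation), each merged tensor is the base tensor plus a weighted sum of the fine-tuned models' deltas from that base:
```python
def task_arithmetic_merge(base_sd, finetuned_sds, weights):
    """Sketch: merged[name] = base[name] + sum_i weights[i] * (finetuned_i[name] - base[name])."""
    # base_sd and each dict in finetuned_sds map parameter names to torch tensors (state_dicts).
    merged = {}
    for name, base_param in base_sd.items():
        delta = sum(w * (sd[name] - base_param) for sd, w in zip(finetuned_sds, weights))
        merged[name] = base_param + delta
    return merged
```
With the YAML below, each listed model contributes with weight 0.5 relative to the Aloe base; in practice the merge is produced by mergekit from that configuration rather than by hand.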
### Models Merged
The following models were included in the merge:
* [ruslanmv/Medical-Llama3-8B](https://huggingface.co/ruslanmv/Medical-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ruslanmv/Medical-Llama3-8B
parameters:
weight: 0.50
- model: HPAI-BSC/Llama3-Aloe-8B-Alpha
parameters:
weight: 0.50
base_model: HPAI-BSC/Llama3-Aloe-8B-Alpha
merge_method: task_arithmetic
dtype: bfloat16
``` |
timm/caformer_s36.sail_in1k | timm | "2023-05-05T05:50:54Z" | 1,709 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T05:50:16Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for caformer_s36.sail_in1k
A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 39.3
- GMACs: 8.0
- Activations (M): 37.5
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_s36.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_s36.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
RWKV/rwkv-raven-14b | RWKV | "2023-05-15T10:09:50Z" | 1,709 | 56 | transformers | [
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-05T12:24:03Z" | ---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 14B parameters chat version (Raven)
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
The "Raven" models needs to be prompted in a specific way, learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv).
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-14b")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-14b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>

### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-14b").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-14b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-14b", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-14b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-14b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-14b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/) |
RWKV/rwkv-raven-7b | RWKV | "2023-05-15T10:09:24Z" | 1,709 | 19 | transformers | [
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-05T12:50:19Z" | ---
datasets:
- EleutherAI/pile
---

# Model card for RWKV-4 | 7B parameters chat version (Raven)
RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064).
# Table of contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Citation](#citation)
## TL;DR
Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM)
> RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
## Model Details
The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration.
## Usage
### Convert the raw weights to the HF format
You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights.
```bash
python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv
```
### Generate text
You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios:
The "Raven" models needs to be prompted in a specific way, learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv).
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>

### Running the model on a single GPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b").to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model in half-precision, on GPU
<details>
<summary> Click to expand </summary>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
### Running the model on multiple GPUs
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-7b")
prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
</details>
## Citation
If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/) |
elinas/chronos-70b-v2 | elinas | "2024-03-23T23:19:36Z" | 1,709 | 15 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"chat",
"roleplay",
"storywriting",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-03T05:08:04Z" | ---
license: cc-by-nc-4.0
tags:
- chat
- roleplay
- storywriting
---
# chronos-70b-v2
This is the FP16 PyTorch / HF version of **chronos-70b-v2**, based on the **Llama v2 Base** model. This version will **not fit on a consumer GPU**; use one of the quantized models linked below!
Big thank you to the Pygmalion team for providing compute. Reach out to me if you would like individual credit.
This model is primarily focused on chat, roleplay, storywriting, with significantly improved reasoning and logic. It does not have any form of censorship, please use responsibly.
Chronos can generate very long outputs with coherent text, largely due to the human inputs it was trained on, and it supports context length up to 4096 tokens.
## License
This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **LLAMA 2 COMMUNITY LICENSE AGREEMENT**. If you'd like to discuss using it for your business, contact Elinas through Discord (**elinas**) or X (Twitter) (**@officialelinas**).
The "Model" (i.e. the base model, derivatives, and merges/mixes) is completely free to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license and the non-commercial use statute remain in any parent repository, regardless of other models' licences.
At the moment, only the released 70b models are under this license, and the terms may change at any time (e.g. to a more permissive license allowing commercial use).
## Model Usage
This model uses Alpaca formatting, so for optimal performance, use it to start the dialogue or story. If you use a frontend like SillyTavern, ENABLE Alpaca instruction mode:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
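Below is a hedged sketch of applying that template with `transformers`; the instruction text and generation settings are placeholders, and loading the full FP16 weights requires multiple GPUs or offloading, as noted above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/chronos-70b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Wrap the request in the Alpaca-style template shown above.
prompt = (
    "### Instruction:\n"
    "Write the opening paragraph of a mystery story set in a lighthouse.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```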
## Tips
Sampling and other settings can make a significant difference for this model, so play around with them. I was also informed by a user that if you are using **KoboldCPP**, the flag `--unbantokens` may improve model performance **significantly**. I have not tested this myself, but it is something to keep in mind.
## Quantized Versions for Consumer GPU Usage
[LlamaCPP Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-70B-v2-GGUF)
[GPTQ Quantized Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-70B-v2-GPTQ)
**Support Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
|
leptonai/TinyLlama-1.1B-Chat-v1.0-4heads | leptonai | "2024-05-14T06:33:01Z" | 1,709 | 0 | transformers | [
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | "2024-05-14T06:32:56Z" | Entry not found |