modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (list, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
handraise-dev/gguf-inference | handraise-dev | 2024-06-19T02:53:26Z | 438 | 1 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-18T16:29:10Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Phi-3-medium-128k-instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> pull request <a href="https://github.com/ggerganov/llama.cpp/pull/7225">7225</a> for quantization.
Original model: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
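For reference, a typical imatrix quantization workflow with llama.cpp looks roughly like this (a hedged sketch only: binary names and flags vary between llama.cpp versions, and the calibration filename is a placeholder):
```
# Build an importance matrix from a calibration text file (filename is a placeholder)
./imatrix -m Phi-3-medium-128k-instruct-f16.gguf -f calibration_data.txt -o phi3-medium.imatrix
# Quantize using that importance matrix
./quantize --imatrix phi3-medium.imatrix Phi-3-medium-128k-instruct-f16.gguf Phi-3-medium-128k-instruct-Q4_K_M.gguf Q4_K_M
```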
## Prompt format
```
<|user|> {prompt}<|end|><|assistant|><|end|>
```
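For example, a prompt filled into this template and passed to llama.cpp might look as follows (a rough sketch; the binary is `./main` in older llama.cpp builds, and the final `<|end|>` is the end-of-turn token the model generates itself):
```
./llama-cli -m Phi-3-medium-128k-instruct-Q4_K_M.gguf -n 256 -p "<|user|> Can you provide ways to eat combinations of bananas and dragonfruits?<|end|><|assistant|>"
```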
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Phi-3-medium-128k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q8_0.gguf) | Q8_0 | 14.83GB | Extremely high quality, generally unneeded but max available quant. |
| [Phi-3-medium-128k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q6_K.gguf) | Q6_K | 11.45GB | Very high quality, near perfect, *recommended*. |
| [Phi-3-medium-128k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 10.07GB | High quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 9.62GB | High quality, *recommended*. |
| [Phi-3-medium-128k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 8.56GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Phi-3-medium-128k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 7.95GB | Slightly lower quality with more space savings, *recommended*. |
| [Phi-3-medium-128k-instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ4_NL.gguf) | IQ4_NL | 7.89GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Phi-3-medium-128k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ4_XS.gguf) | IQ4_XS | 7.46GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-3-medium-128k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 7.49GB | Lower quality but usable, good for low RAM availability. |
| [Phi-3-medium-128k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 6.92GB | Even lower quality. |
| [Phi-3-medium-128k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_M.gguf) | IQ3_M | 6.47GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-3-medium-128k-instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_S.gguf) | IQ3_S | 6.06GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Phi-3-medium-128k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 6.06GB | Low quality, not recommended. |
| [Phi-3-medium-128k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_XS.gguf) | IQ3_XS | 5.80GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-3-medium-128k-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ3_XXS.gguf) | IQ3_XXS | 5.45GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Phi-3-medium-128k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q2_K.gguf) | Q2_K | 5.14GB | Very low quality but surprisingly usable. |
| [Phi-3-medium-128k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_M.gguf) | IQ2_M | 4.71GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Phi-3-medium-128k-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_S.gguf) | IQ2_S | 4.33GB | Very low quality, uses SOTA techniques to be usable. |
| [Phi-3-medium-128k-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_XS.gguf) | IQ2_XS | 4.12GB | Very low quality, uses SOTA techniques to be usable. |
| [Phi-3-medium-128k-instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ2_XXS.gguf) | IQ2_XXS | 3.71GB | Lower quality, uses SOTA techniques to be usable. |
| [Phi-3-medium-128k-instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ1_M.gguf) | IQ1_M | 3.24GB | Extremely low quality, *not* recommended. |
| [Phi-3-medium-128k-instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-IQ1_S.gguf) | IQ1_S | 2.95GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Phi-3-medium-128k-instruct-GGUF --include "Phi-3-medium-128k-instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Phi-3-medium-128k-instruct-GGUF --include "Phi-3-medium-128k-instruct-Q8_0.gguf/*" --local-dir Phi-3-medium-128k-instruct-Q8_0
```
You can either specify a new local-dir (Phi-3-medium-128k-instruct-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
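As a rough illustration of that sizing rule, here is a small Python sketch (a hypothetical helper; the sizes come from the table above, and the headroom accounts for context/KV cache):
```python
# Hypothetical helper: pick the largest quant that still leaves some headroom.
QUANT_SIZES_GB = {
    "Q6_K": 11.45, "Q5_K_M": 10.07, "Q4_K_M": 8.56,
    "IQ4_XS": 7.46, "Q3_K_M": 6.92, "IQ2_M": 4.71,
}

def pick_quant(available_gb, headroom_gb=2.0):
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= available_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(12.0))  # a 12GB GPU -> "Q4_K_M" (8.56GB fits under the 10GB budget)
```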
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed versus quality is a tradeoff you'll have to weigh.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
KEITA21/Meta-Llama-3-8B-Instruct | KEITA21 | 2024-06-19T14:43:07Z | 438 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
]
| null | 2024-06-19T14:41:39Z | ---
license: llama3
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-08
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Klevin/Emo-AI-3B-Q2_K-GGUF | Klevin | 2024-06-22T04:00:01Z | 438 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Klevin/Emo-AI-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-22T03:59:48Z | ---
base_model: Klevin/Emo-AI-3B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Klevin/Emo-AI-3B-Q2_K-GGUF
This model was converted to GGUF format from [`Klevin/Emo-AI-3B`](https://huggingface.co/Klevin/Emo-AI-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Klevin/Emo-AI-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Klevin/Emo-AI-3B-Q2_K-GGUF --hf-file emo-ai-3b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Klevin/Emo-AI-3B-Q2_K-GGUF --hf-file emo-ai-3b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Klevin/Emo-AI-3B-Q2_K-GGUF --hf-file emo-ai-3b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Klevin/Emo-AI-3B-Q2_K-GGUF --hf-file emo-ai-3b-q2_k.gguf -c 2048
```
|
Science-geek32/DialoGPT-small-doctor2.0 | Science-geek32 | 2021-10-19T23:18:32Z | 437 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
tags:
- conversational
---
13th doctor model DialoGPT-small |
jonatasgrosman/wav2vec2-xls-r-1b-german | jonatasgrosman | 2022-12-14T02:01:16Z | 437 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 German by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: de
metrics:
- name: Test WER
type: wer
value: 10.95
- name: Test CER
type: cer
value: 2.72
- name: Test WER (+LM)
type: wer
value: 8.13
- name: Test CER (+LM)
type: cer
value: 2.18
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Dev WER
type: wer
value: 22.68
- name: Dev CER
type: cer
value: 9.17
- name: Dev WER (+LM)
type: wer
value: 17.07
- name: Dev CER (+LM)
type: cer
value: 8.45
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: de
metrics:
- name: Test WER
type: wer
value: 19.67
---
# Fine-tuned XLS-R 1B model for speech recognition in German
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on German using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously provided by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-german")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "de"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-german"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
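Continuing the snippet above, one quick way to sanity-check the transcriptions against the reference sentences is with the `jiwer` package (a minimal sketch, assuming `jiwer` is installed):
```python
from jiwer import cer, wer

references = list(test_dataset["sentence"])
print("WER:", wer(references, predicted_sentences))
print("CER:", cer(references, predicted_sentences))
```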
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-german,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {G}erman},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-german}},
year={2022}
}
``` |
textgain/allnli-GroNLP-bert-base-dutch-cased | textgain | 2023-08-06T18:09:12Z | 437 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"nl",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2023-01-16T13:17:02Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- nl
widget:
- source_sentence: "De kat slaapt op het bed."
sentences:
- "De poes rust op het matras."
- "De hond slaapt naast het bed."
- "Het bed is gemaakt van hout."
---
# allnli-GroNLP-bert-base-dutch-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
model = SentenceTransformer('textgain/allnli-GroNLP-bert-base-dutch-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
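For a small semantic-search example with the embeddings (a minimal sketch using the `util` helpers bundled with sentence-transformers and the sentences from the widget above):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('textgain/allnli-GroNLP-bert-base-dutch-cased')
query = "De kat slaapt op het bed."
candidates = ["De poes rust op het matras.", "De hond slaapt naast het bed.", "Het bed is gemaakt van hout."]

# Encode and rank the candidates by cosine similarity to the query
query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_embs)[0]
for sentence, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {sentence}")
```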
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["De kat slaapt op het bed.", "De poes rust op het matras."]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
model = AutoModel.from_pretrained('textgain/allnli-GroNLP-bert-base-dutch-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 4388 with parameters:
```
{'batch_size': 128}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 438,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 439,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ybelkada/flan-t5-xl-sharded-bf16 | ybelkada | 2023-02-16T17:26:36Z | 437 | 12 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-02-16T17:24:25Z | Entry not found |
TheBloke/openbuddy-mistral-7B-v13-base-GGUF | TheBloke | 2023-10-16T13:28:51Z | 437 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"base_model:OpenBuddy/openbuddy-mistral-7b-v13-base",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-16T13:24:22Z | ---
base_model: OpenBuddy/openbuddy-mistral-7b-v13-base
inference: false
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
library_name: transformers
license: apache-2.0
model_creator: OpenBuddy
model_name: OpenBuddy Mistral 7B v13 Base
model_type: mistral
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenBuddy Mistral 7B v13 Base - GGUF
- Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
- Original model: [OpenBuddy Mistral 7B v13 Base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenBuddy's OpenBuddy Mistral 7B v13 Base](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF)
* [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13-base)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
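As a back-of-the-envelope illustration of how these bits-per-weight figures translate into file size (a rough sketch only, since different tensors in a file can use different quant types, and 7.24B is an assumed parameter count for Mistral 7B):
```python
# Approximate GGUF file size for a ~7.24B-parameter model at a given effective bpw.
def approx_size_gb(n_params, bpw):
    return n_params * bpw / 8 / 1e9

for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{approx_size_gb(7.24e9, bpw):.2f} GB")
```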
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openbuddy-mistral-7b-v13-base.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q2_K.gguf) | Q2_K | 2 | 3.10 GB| 5.60 GB | smallest, significant quality loss - not recommended for most purposes |
| [openbuddy-mistral-7b-v13-base.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q3_K_S.gguf) | Q3_K_S | 3 | 3.19 GB| 5.69 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13-base.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q3_K_M.gguf) | Q3_K_M | 3 | 3.54 GB| 6.04 GB | very small, high quality loss |
| [openbuddy-mistral-7b-v13-base.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q3_K_L.gguf) | Q3_K_L | 3 | 3.85 GB| 6.35 GB | small, substantial quality loss |
| [openbuddy-mistral-7b-v13-base.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q4_0.gguf) | Q4_0 | 4 | 4.14 GB| 6.64 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openbuddy-mistral-7b-v13-base.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q4_K_S.gguf) | Q4_K_S | 4 | 4.17 GB| 6.67 GB | small, greater quality loss |
| [openbuddy-mistral-7b-v13-base.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q4_K_M.gguf) | Q4_K_M | 4 | 4.39 GB| 6.89 GB | medium, balanced quality - recommended |
| [openbuddy-mistral-7b-v13-base.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q5_0.gguf) | Q5_0 | 5 | 5.03 GB| 7.53 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openbuddy-mistral-7b-v13-base.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q5_K_S.gguf) | Q5_K_S | 5 | 5.03 GB| 7.53 GB | large, low quality loss - recommended |
| [openbuddy-mistral-7b-v13-base.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q5_K_M.gguf) | Q5_K_M | 5 | 5.16 GB| 7.66 GB | large, very low quality loss - recommended |
| [openbuddy-mistral-7b-v13-base.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q6_K.gguf) | Q6_K | 6 | 5.97 GB| 8.47 GB | very large, extremely low quality loss |
| [openbuddy-mistral-7b-v13-base.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-mistral-7B-v13-base-GGUF/blob/main/openbuddy-mistral-7b-v13-base.Q8_0.gguf) | Q8_0 | 8 | 7.74 GB| 10.24 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/openbuddy-mistral-7B-v13-base-GGUF and below it, a specific filename to download, such as: openbuddy-mistral-7b-v13-base.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-base-GGUF openbuddy-mistral-7b-v13-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-base-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-mistral-7B-v13-base-GGUF openbuddy-mistral-7b-v13-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openbuddy-mistral-7b-v13-base.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-mistral-7B-v13-base-GGUF", model_file="openbuddy-mistral-7b-v13-base.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenBuddy's OpenBuddy Mistral 7B v13 Base
# ⚠️ About Base-series Models ⚠️
This is a part of the Base-series models, trained utilizing approximately 50% of conversational data. It embodies cognitive and dialogue capabilities parallel to the fully-trained OpenBuddy models, yet **it hasn’t been extensively fine-tuned for generic conversational tasks**.
We released this model intending to empower the community, enabling further fine-tuning and deployment of specialized, domain-specific models.
For immediate use in generic conversations, consider referring to our versions without the `-base` suffix.
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
<!-- original-model-card end -->
|
sail/Sailor-1.8B-Chat | sail | 2024-04-11T09:46:58Z | 437 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"conversational",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"arxiv:2404.03608",
"base_model:sail/Sailor-1.8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-02T09:49:32Z | ---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
widget:
- text: "如何制作烤鱼?"
example_title: "Chinese"
- text: "How to bake fish?"
example_title: "English"
- text: "Bagaimana cara memanggang ikan?"
example_title: "Malay"
- text: "วิธีย่างปลา?"
example_title: "Thai"
- text: "Bagaimana membuat bakaran ikan?"
example_title: "Indonesian"
- text: "Làm thế nào để nướng cá?"
example_title: "Vietnamese"
license: apache-2.0
base_model: sail/Sailor-1.8B
inference: false
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across diverse linguistic landscapes of SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 7B versions for different requirements.
We further fine-tune the base models with open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have great performance on SEA languages.
The pre-training corpus heavily leverages the publicly available corpus, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpus are all publicly available including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
## Requirements
The code for Sailor is supported in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`.
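For example, one way to satisfy that requirement:
```shell
pip install -U "transformers>=4.37.0"
```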
## Quickstart
Here is a code snippet showing how to load the tokenizer and model, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'sail/Sailor-1.8B-Chat',
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-1.8B-Chat')
system_prompt= 'You are a helpful assistant'
prompt = "Beri saya pengenalan singkat tentang model bahasa besar."
# prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn."
# prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่"
messages = [
{"role": "system", "content": system_prompt},
{"role": "question", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
input_ids = model_inputs.input_ids.to(device)
generated_ids = model.generate(
input_ids,
max_new_tokens=512,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# License
Sailor is distributed under the terms of the Apache License 2.0.
There is no restriction on research or commercial use, but such use should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@misc{dou2024sailor,
title={Sailor: Open Language Models for South-East Asia},
author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin},
year={2024},
eprint={2404.03608},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]). |
llmixer/BigWeave-v27-95b | llmixer | 2024-03-05T09:44:42Z | 437 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"frankenmerge",
"95b",
"en",
"base_model:152334H/miqu-1-70b-sf",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-05T08:41:34Z | ---
base_model:
- 152334H/miqu-1-70b-sf
license: unknown
language:
- en
pipeline_tag: text-generation
tags:
- merge
- frankenmerge
- 95b
---
# BigWeave v27 95b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
# Prompting Format
Chatml, Mistral, Vicuna.
# Merge process
This is a self-merge of 152334H/miqu-1-70b-sf. The 30 most important layers (according to exl2 measurements) are duplicated with 50% overlap.
Merge configuration:
```
slices:
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [0,40]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [34,45] # dup 34-44
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [40,52]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [51,53] # dup 51-52
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [52,55]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [54,56] # dup 54-55
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [55,59]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [58,60] # dup 58-59
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [59,72]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [64,79] # dup 64-78
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [72,80]
merge_method: passthrough
dtype: float16
``` |
grimjim/kukulemon-7B-GGUF | grimjim | 2024-04-24T05:17:05Z | 437 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-20T12:19:50Z | ---
base_model:
- grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-7B-GGUF
The files are GGUF quants of [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
A merge of two similar Kunoichi models with strong reasoning (hopefully resulting in a "dense" encoding of said reasoning) was in turn merged with a model targeting roleplay.
I've tested with ChatML prompts at temperature=1.1 and minP=0.03. The model itself supports Alpaca format prompts. The model claims a context length of 32K, but it seemed to lose coherence after 8K in my informal testing.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
You can also download [GGUF-IQ-Imatrix quants courtesy of Lewdiculous](https://huggingface.co/Lewdiculous/kukulemon-7B-GGUF-IQ-Imatrix/).
There's also an [8.0bpw h8 exl2](https://huggingface.co/grimjim/kukulemon-7B-8.0bpw_h8_exl2) quant available.
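For reference, a minimal generation sketch with the sampler settings mentioned above, using llama-cpp-python (the GGUF filename below is an assumption; point it at whichever quant you downloaded from this repo):
```python
from llama_cpp import Llama

# Filename is an assumption; use the quant file you actually downloaded.
llm = Llama(model_path="kukulemon-7B.Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short scene set in a rainy harbour town.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, temperature=1.1, min_p=0.03, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```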
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
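Roughly speaking, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal sketch of the underlying formula (illustrative only, not mergekit's actual implementation):
```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly colinear: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```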
### Models Merged
The following models were included in the merge:
* [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
layer_range: [0, 32]
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
|
CultriX/MoNeuTrix-MoE-4x7B | CultriX | 2024-03-24T17:34:44Z | 437 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MonaTrix-v4",
"mlabonne/OmniTruthyBeagle-7B-v0",
"CultriX/MoNeuTrix-7B-v1",
"paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:CultriX/MonaTrix-v4",
"base_model:mlabonne/OmniTruthyBeagle-7B-v0",
"base_model:CultriX/MoNeuTrix-7B-v1",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-24T17:17:14Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- CultriX/MonaTrix-v4
- mlabonne/OmniTruthyBeagle-7B-v0
- CultriX/MoNeuTrix-7B-v1
- paulml/OmniBeagleSquaredMBX-v3-7B
base_model:
- CultriX/MonaTrix-v4
- mlabonne/OmniTruthyBeagle-7B-v0
- CultriX/MoNeuTrix-7B-v1
- paulml/OmniBeagleSquaredMBX-v3-7B
---
# MoNeuTrix-MoE-4x7B
MoNeuTrix-MoE-4x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [CultriX/MoNeuTrix-7B-v1](https://huggingface.co/CultriX/MoNeuTrix-7B-v1)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
## 🧩 Configuration
```yaml
base_model: "CultriX/MonaTrix-v4"
dtype: bfloat16
gate:
type: "learned"
temperature: 0.1
scaling_factor: 10
experts:
- source_model: "CultriX/MonaTrix-v4" # Historical Analysis, Geopolitics, and Economic Evaluation
positive_prompts:
- "Historical Analysis"
- "Geopolitical Evaluation"
- "Economic Insights"
- "Policy Analysis"
- "Socio-Economic Impacts"
- "Geopolitical Analysis"
- "Cultural Commentary"
- "Analyze geopolitical"
- "Analyze historic"
- "Analyze historical"
- "Assess the political dynamics of the Cold War and its global impact."
- "Evaluate the historical significance of the Silk Road in ancient trade."
negative_prompts:
- "Technical Writing"
- "Mathematical Problem Solving"
- "Software Development"
- "Artistic Creation"
- "Machine Learning Development"
- "Storywriting"
- "Character Development"
- "Roleplaying"
- "Narrative Creation"
- source_model: "mlabonne/OmniTruthyBeagle-7B-v0" # Multilingual Communication and Cultural Insights
positive_prompts:
- "Multilingual Communication"
- "Cultural Insights"
- "Translation and Interpretation"
- "Cultural Norms Exploration"
- "Intercultural Communication Practices"
- "Describe cultural significance"
- "Narrate cultural"
- "Discuss cultural impact"
negative_prompts:
- "Scientific Analysis"
- "Creative Writing"
- "Technical Documentation"
- "Economic Modeling"
- "Historical Documentation"
- "Programming"
- "Algorithm Development"
- source_model: "CultriX/MoNeuTrix-7B-v1" # Creative Problem Solving and Innovation
positive_prompts:
- "Innovation and Design"
- "Problem Solving"
- "Creative Thinking"
- "Strategic Planning"
- "Conceptual Design"
- "Innovation and Design"
- "Problem Solving"
- "Compose narrative content or poetry."
- "Create complex puzzles and games."
- "Devise strategy"
negative_prompts:
- "Historical Analysis"
- "Linguistic Translation"
- "Economic Forecasting"
- "Geopolitical Analysis"
- "Cultural Commentary"
- "Historical Documentation"
- "Scientific Explanation"
- "Data Analysis Techniques"
- source_model: "paulml/OmniBeagleSquaredMBX-v3-7B" # Scientific and Technical Expertise
positive_prompts:
- "Scientific Explanation"
- "Technical Analysis"
- "Experimental Design"
- "Data Analysis Techniques"
- "Scientific Innovation"
- "Mathematical Problem Solving"
- "Algorithm Development"
- "Programming"
- "Analyze data"
- "Analyze statistical data on climate change trends."
- "Conduct basic data analysis or statistical evaluations."
negative_prompts:
- "Cultural Analysis"
- "Creative Arts"
- "Linguistic Challenges"
- "Political Debating"
- "Marketing Strategies"
- "Storywriting"
- "Character Development"
- "Roleplaying"
- "Narrative Creation"
```
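A configuration in this style is typically turned into actual model weights with mergekit's MoE tooling. The command below is only an illustrative sketch, not the author's verified build steps: the config filename and output directory are placeholders, and the exact gate options supported by `mergekit-moe` may differ from the schema shown above.

```shell
pip install -qU mergekit
# "moe-config.yaml" holds a MoE config like the one above; the output path is a placeholder.
mergekit-moe moe-config.yaml ./MoNeuTrix-MoE-4x7B
```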
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MoNeuTrix-MoE-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Joseph717171/Mistral-12.25B-Instruct-v0.2 | Joseph717171 | 2024-03-31T12:51:55Z | 437 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.15166",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-31T06:37:34Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Credit for the model card's description goes to ddh0, mergekit, and mistralai
# Mistral-12.25B-Instruct-v0.2
This is Mistral-12.25B-Instruct-v0.2, a depth-upscaled version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.
Paper detailing how Depth-Up Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# UpStage's conclusionary limitations of their research:
"Our study on the Depth Up-Scaling (DUS) has important limitations and considerations. **One key limitation is the need for more thorough explorations of hyperparameters used in the DUS approach. Namely, we removed m = 8 layers from both ends of our base model, primarily due to hardware limitations. However, we have not yet determined if this value is optimal for enhancing performance.** The extended time and cost of continued pretraining made it challenging to conduct more comprehensive experiments, which we aim to address in future work through various comparative analyses."
This model was made to help test whether 10.7B parameters (m = 8) perform better or worse than 10.7B+ parameters (m < 8).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* /Users/jsarnecki/opt/Workspace/mistralai/Mistral-7B-Instruct-v0.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
# Depth UpScaled (DUS) version of Mistral-7B-Instruct-v0.2
# where m = 4 (The number of layers to remove from the model)
# s = 56 (The number of layers the model will have after the DUS)
slices:
- sources:
- layer_range: [0, 28]
model: /Users/jsarnecki/opt/Workspace/mistralai/Mistral-7B-Instruct-v0.2
- sources:
- layer_range: [4, 32]
model: /Users/jsarnecki/opt/Workspace/mistralai/Mistral-7B-Instruct-v0.2
```
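For reference, a passthrough configuration like the one above can be materialized with mergekit's command-line interface. This is a minimal sketch with placeholder paths, assuming a local copy of the base model at the location named in the YAML; it is not necessarily the exact command used to produce this repository:

```shell
pip install -qU mergekit
# "dus-config.yaml" holds the YAML above; the output directory is a placeholder.
mergekit-yaml dus-config.yaml ./Mistral-12.25B-Instruct-v0.2 --copy-tokenizer
```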
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
SrCh1nask1/X-Machina-7b-slerp-v0.0 | SrCh1nask1 | 2024-04-18T23:22:55Z | 437 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"occiglot/occiglot-7b-es-en-instruct",
"chihoonlee10/T3Q-DPO-Mistral-7B",
"conversational",
"base_model:occiglot/occiglot-7b-es-en-instruct",
"base_model:chihoonlee10/T3Q-DPO-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-08T00:34:24Z | ---
tags:
- merge
- mergekit
- lazymergekit
- occiglot/occiglot-7b-es-en-instruct
- chihoonlee10/T3Q-DPO-Mistral-7B
base_model:
- occiglot/occiglot-7b-es-en-instruct
- chihoonlee10/T3Q-DPO-Mistral-7B
license: apache-2.0
---
# X-Machina-7b-slerp-v0.0
X-Machina-7b-slerp-v0.0 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [occiglot/occiglot-7b-es-en-instruct](https://huggingface.co/occiglot/occiglot-7b-es-en-instruct)
* [chihoonlee10/T3Q-DPO-Mistral-7B](https://huggingface.co/chihoonlee10/T3Q-DPO-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: occiglot/occiglot-7b-es-en-instruct
layer_range: [0, 32]
- model: chihoonlee10/T3Q-DPO-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: occiglot/occiglot-7b-es-en-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
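The `t` values above control how far each layer is interpolated away from the base model (t = 0 keeps occiglot-7b-es-en-instruct, t = 1 keeps T3Q-DPO-Mistral-7B), with separate schedules for the self-attention and MLP weights. As a rough, simplified illustration of what spherical linear interpolation does to a single pair of weight tensors (this is not mergekit's actual implementation):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_unit = v0 / (v0.norm() + eps)
    v1_unit = v1 / (v1.norm() + eps)
    dot = torch.clamp((v0_unit * v1_unit).sum(), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:  # nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

# t = 0 returns the first tensor, t = 1 the second, 0.5 an angular midpoint.
merged = slerp(0.5, torch.randn(4096), torch.randn(4096))
print(merged.shape)
```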
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "SrCh1nask1/X-Machina-7b-slerp-v0.0"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/PiVoT-MoE-GGUF | mradermacher | 2024-05-06T05:00:13Z | 437 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:maywell/PiVoT-MoE",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-12T05:59:21Z | ---
base_model: maywell/PiVoT-MoE
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/maywell/PiVoT-MoE
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PiVoT-MoE-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
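As a minimal starting point (assuming a local llama.cpp build; the chosen quant, context size and prompt are only examples), one of the files listed below can be downloaded and run directly:

```shell
huggingface-cli download mradermacher/PiVoT-MoE-GGUF PiVoT-MoE.Q4_K_M.gguf --local-dir .
./main -m PiVoT-MoE.Q4_K_M.gguf -c 4096 -n 256 -p "Write a short haiku about autumn."
```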
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q2_K.gguf) | Q2_K | 13.3 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.IQ3_XS.gguf) | IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q3_K_S.gguf) | Q3_K_S | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.IQ3_S.gguf) | IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.IQ3_M.gguf) | IQ3_M | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q3_K_M.gguf) | Q3_K_M | 17.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q3_K_L.gguf) | Q3_K_L | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.IQ4_XS.gguf) | IQ4_XS | 19.6 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q4_K_S.gguf) | Q4_K_S | 20.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q4_K_M.gguf) | Q4_K_M | 21.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q5_K_S.gguf) | Q5_K_S | 24.9 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q5_K_M.gguf) | Q5_K_M | 25.7 | |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q6_K.gguf) | Q6_K | 29.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PiVoT-MoE-GGUF/resolve/main/PiVoT-MoE.Q8_0.gguf) | Q8_0 | 38.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LiteLLMs/gemma-7b-GGUF | LiteLLMs | 2024-04-18T20:50:10Z | 437 | 1 | transformers | [
"transformers",
"gguf",
"GGUF",
"arxiv:2305.14314",
"arxiv:2312.11805",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-17T12:56:31Z | ---
license: gemma
library_name: transformers
tags:
- GGUF
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
quantized_by: andrijdavid
---
# gemma-7b-GGUF
- Original model: [gemma-7b](https://huggingface.co/google/gemma-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [gemma-7b](https://huggingface.co/google/gemma-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/gemma-7b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/gemma-7b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/gemma-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/gemma-7b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
  n_ctx=8192,  # The max sequence length to use - Gemma is trained on an 8192-token context; longer sequences require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
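For a concrete starting point, here is a small, hedged sketch of the llama-cpp-python route through LangChain; the model path and sampling parameters are placeholders, and the linked guides remain the authoritative reference:

```python
# pip install langchain-community llama-cpp-python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # downloaded as shown earlier
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    n_ctx=8192,        # Gemma is trained on an 8192-token context
    temperature=0.7,
    max_tokens=256,
)

print(llm.invoke("Write me a poem about Machine Learning."))
```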
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: gemma-7b
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Context Length
Models are trained on a context length of 8192 tokens.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314); a minimal QLoRA loading sketch follows this list
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb).
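To make the QLoRA route above more concrete, the snippet below is a hedged sketch of the loading step only: it quantizes Gemma to 4-bit and attaches LoRA adapters ready for supervised fine-tuning. The LoRA hyperparameters and target modules are illustrative assumptions, and the actual training loop lives in the linked scripts and notebook.

```python
# pip install -U transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as commonly used for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", quantization_config=bnb_config, device_map="auto"
)

# Illustrative LoRA settings; tune rank, alpha and target modules for your task
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```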
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| -- | -- | -- | -- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; their input data pre-processing is described and posterior evaluations
are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
<!-- original-model-card end --> |
mradermacher/budllama3-8b-GGUF | mradermacher | 2024-05-05T15:11:52Z | 437 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Budbrain/budllama3-8b",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-24T14:51:12Z | ---
base_model: Budbrain/budllama3-8b
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Budbrain/budllama3-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/budllama3-8b-GGUF/resolve/main/budllama3-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mayflowergmbh/Llama-3-SauerkrautLM-8b-Instruct-GGUF | mayflowergmbh | 2024-04-25T10:16:22Z | 437 | 1 | null | [
"gguf",
"two stage dpo",
"dpo",
"de",
"en",
"license:other",
"region:us"
]
| null | 2024-04-24T16:48:58Z | ---
language:
- de
- en
license: other
tags:
- two stage dpo
- dpo
- gguf
license_name: llama3
license_link: LICENSE
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---

## VAGO solutions Llama-3-SauerkrautLM-8b-Instruct
Introducing **Llama-3-SauerkrautLM-8b-Instruct** – our Sauerkraut version of the powerful [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)!
The model **Llama-3-SauerkrautLM-8b-Instruct** is a **joint effort** between **VAGO Solutions** and **Hyperspace.ai.**
- Aligned with **DPO**
# Table of Contents
1. [Overview of all Llama-3-SauerkrautLM-8b-Instruct](#all-Llama-3-SauerkrautLM-8b-Instruct)
2. [Model Details](#model-details)
- [Prompt template](#prompt-template)
- [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-llama-3-8B-Instruct
| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama-3-SauerkrautLM-8b-Instruct | [Link](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-exl2) | [Link](https://huggingface.co/bartowski/Llama-3-SauerkrautLM-8b-Instruct-GGUF) | coming soon |
## Model Details
**SauerkrautLM-llama-3-8B-Instruct**
- **Model Type:** Llama-3-SauerkrautLM-8b-Instruct is a fine-tuned model based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Language(s):** German, English
- **License:** [meta-llama](https://llama.meta.com/llama3/license)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
- We trained this model in a two-staged DPO fine-tuning: one epoch on 70k samples followed by another epoch on 20k samples.
- LaserRMT version coming soon
**We improved the model's capabilities noticeably by feeding it with curated German data.**
### Prompt Template:
**English:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
**German:**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|>
Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
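As an illustration (not part of the original card), a prompt in this layout could be assembled as follows; the example messages are placeholders, and tokenizers of Llama-3-based models typically produce an equivalent layout via `tokenizer.apply_chat_template`:
```python
# Minimal sketch: reproduce the template exactly as printed above.
# The system and user messages are placeholder assumptions, not recommendations from the card.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

prompt = build_prompt("You are a helpful AI assistant.", "Wie geht es dir?")
```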
### Example output in German:
```
Prompt: Erkläre mir die Vorteile für das Fine-Tuning von Meta's LLama Modellen.
Antwort:
Sehr gern!
Fine-tuning von Meta's LLaMA-Modellen ist eine sehr effektive Möglichkeit, um Ihre Anwendung oder Ihr Projekt mit einer hochentwickelten Sprachverarbeitung auszustatten.
Die Vorteile des Fine-Tunings von LLaMA-Modellen sind:
Adaptation an spezielle Aufgaben: Durch das Fine-Tuning können Sie die Modelle auf Ihre speziellen Bedürfnisse und Anforderungen anpassen.
Dies ermöglicht es Ihnen, die Leistung der Modelle in Ihren Anwendungen zu verbessern.
Verbesserte Genauigkeit: Die fine-tuned Modelle lernen, sich an die spezifischen Merkmale Ihres Datenbestands anzupassen, was zu einer besseren Genauigkeit bei der Textklassifizierung, -generierung, -summarisierung usw. führt.
Anpassung an Branche oder Domain: Wenn Sie beispielsweise ein Medizinprojekt haben, können Sie die Modelle auf medizinische Fachterminologie und -konzepte trainieren, um eine höhere Präzision bei der Analyse medizinischer Texte zu erzielen.
Kosten-Effektivität: Im Vergleich zum Training neuer Modelle von Grund auf kann das Fine-Tuning viel kosteneffektiver sein, da Sie nur einen kleinen Teil des Trainingsdatums benötigen, um die Modelle an Ihre spezifischen Anforderungen anzupassen.
Schnellere Entwicklung: Das Fine-Tuning ermöglicht es Entwicklern, schnell neue Funktionen und Features in ihre Anwendungen einzufügen, ohne dass sie lange warten müssen, bis neue Modelle trainiert werden.
Bessere Interpretierbarkeit: Da die fine-tuned Modelle auf Ihre spezifischen Daten bestimmt wurden, können Sie leichter verstehen, warum bestimmte Entscheidungen getroffen werden, was wichtig ist, wenn es um Transparenz und Verantwortlichkeit geht.
Insgesamt bietet das Fine-Tuning von LLaMA-Modellen eine flexible und effektive Möglichkeit, um Ihre Anwendungen und Projekte durch die Integration von fortschrittlichen Sprachmodellen zu verbessern.
```
## Evaluation
**Open LLM Leaderboard:**
evaluated with lm-evaluation-harness 0.4.2
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **74.57** |
| ARC (25-shot) | 74.66 |
| HellaSwag (10-shot) | 89.60 |
| MMLU (5-shot) | 66.55 |
| TruthfulQA (0-shot) | 66.32 |
| Winogrande (5-shot) | 80.98 |
| GSM8K (5-shot) | 69.29 |
**MT-Bench English**
```
########## First turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 1 8.15625
########## Second turn ##########
score
model turn
Llama-3-SauerkrautLM-8b-Instruct 2 7.65
########## Average ##########
score
model
Llama-3-SauerkrautLM-8b-Instruct 7.903125 *
```
* Due to the specific instruction training, the English MT-Bench score is slightly lower than that of the original Llama-3-8B-Instruct.
**MT-Bench German**
coming soon
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [Meta](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for providing such a valuable model to the open-source community.
Also many thanks to [bartowski](https://huggingface.co/bartowski) for the super fast quantization of our model in GGUF and EXL2 format.
|
tkempto1/hybrid-qa-v2 | tkempto1 | 2024-05-13T16:52:32Z | 437 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"question-answering",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| question-answering | 2024-05-07T00:35:35Z | ---
license: mit
library_name: transformers
pipeline_tag: question-answering
---
# Hybrid QA Model
This model takes the results of selected generative and extractive models and returns the one with the higher confidence.
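Since the card below is largely an unfilled template, the following is only an illustrative sketch of the selection logic described above; the underlying pipelines, model names, and confidence threshold are assumptions, not documented properties of this model:
```python
# Illustrative sketch only: the sub-models and the confidence rule are assumptions.
from transformers import pipeline

extractive = pipeline("question-answering", model="deepset/roberta-base-squad2")
generative = pipeline("text2text-generation", model="google/flan-t5-base")

def hybrid_answer(question: str, context: str) -> str:
    ext = extractive(question=question, context=context)          # dict with "answer" and "score"
    gen = generative(f"question: {question} context: {context}")  # list of dicts with "generated_text"
    # Prefer the extractive answer when its confidence is high enough; otherwise fall back
    # to the generative answer, whose score is not directly comparable.
    return ext["answer"] if ext["score"] >= 0.5 else gen[0]["generated_text"]
```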
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF | mradermacher | 2024-05-19T22:31:18Z | 437 | 2 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.1-yi-1.5-34b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-19T14:02:07Z | ---
base_model: cognitivecomputations/dolphin-2.9.1-yi-1.5-34b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.1-yi-1.5-34b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
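For a quick local test, a minimal sketch with the `llama-cpp-python` bindings could look like this (the file name, context size, and prompt are placeholders; the llama.cpp CLI works just as well):
```python
# Minimal sketch, assuming llama-cpp-python is installed and one of the files below was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="dolphin-2.9.1-yi-1.5-34b.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what an imatrix quant is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```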
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-yi-1.5-34b-i1-GGUF/resolve/main/dolphin-2.9.1-yi-1.5-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aratako/Ninja-v1-RP-expressive-breadcrumbs | Aratako | 2024-06-01T11:54:18Z | 437 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"merge",
"mergekit",
"ja",
"base_model:Aratako/Ninja-v1-RP",
"base_model:Elizezen/Antler-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-26T06:36:42Z | ---
license: cc-by-nc-4.0
language:
- ja
library_name: transformers
tags:
- roleplay
- merge
- mergekit
base_model: [Aratako/Ninja-v1-RP, Elizezen/Antler-7B]
---
# Ninja-v1-RP-expressive-breadcrumbs
[GGUF版はこちら/Click here for the GGUF version](https://huggingface.co/Aratako/Ninja-v1-RP-expressive-breadcrumbs-GGUF)
## Overview
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This is a roleplay model whose expressiveness has been strengthened by taking the roleplay model [Aratako/Ninja-v1-RP](https://huggingface.co/Aratako/Ninja-v1-RP) as the base and merging in a derivative of the novel-writing model [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B).
Compared to [Aratako/Ninja-v1-RP-expressive](https://huggingface.co/Aratako/Ninja-v1-RP-expressive), the method used for the final model merge has been changed. This change weakens the influence of the novel-writing model and makes the model less likely to keep writing continuations on its own.
## Prompt Format
Please use the Vicuna chat template. The system prompt that passes the character settings and other context is expected to appear before the first `USER: `. For multi-turn conversations, be sure to append the `eos_token` (`</s>`) to the end of the assistant's response in every turn.
```
{ロールプレイの指示、世界観・あらすじの説明、キャラの設定など}
USER: {userの最初の入力}
ASSISTANT:
```
Example of an actual prompt (turn 1):
```
今からロールプレイを行いましょう。"桜"というキャラとしてロールプレイしてください。会話相手は"悠人"という人物です。人物の設定を以下に示します。
あなたがなりきる"桜"というキャラクターの設定は以下の通りです。
名前:桜
年齢:24歳
職業:悠人に仕えるメイド
容姿:黒髪黒目、ロングヘアー、スリムな体型。
口調:丁寧語を使う。一人称は「私」で、主人である悠人のことは「ご主人様」と呼ぶ。
性格:母性が強く、甘えられるのが好き。料理や家事が得意で家庭的。可愛いものが好き。ご主人様を尊敬しており、彼の幸せを第一に考える。
過去の出来事:悠人を支えるために、彼の家に仕えることを決めた。
また、あなたが会話する相手である"悠人"という人物の設定は以下の通りです。
名前:悠人
年齢:20歳
職業:貴族、桜の主人
容姿:黒髪黒目、背は高め
性格:かなりの甘え上手。桜が大好き。
それでは、上記の設定をもとにして"桜"として会話してください。
回答の中では、"桜"のセリフや心情の描写を含めてください。
USER: 悠人「おはよう!」(リビングに降りてきた悠人は桜に元気よくあいさつする)
ASSISTANT:
```
Example of an actual prompt (turn 2):
```
今からロールプレイを行いましょう。"桜"というキャラとしてロールプレイしてください。会話相手は"悠人"という人物です。人物の設定を以下に示します。
あなたがなりきる"桜"というキャラクターの設定は以下の通りです。
名前:桜
年齢:24歳
職業:悠人に仕えるメイド
容姿:黒髪黒目、ロングヘアー、スリムな体型。
口調:丁寧語を使う。一人称は「私」で、主人である悠人のことは「ご主人様」と呼ぶ。
性格:母性が強く、甘えられるのが好き。料理や家事が得意で家庭的。可愛いものが好き。ご主人様を尊敬しており、彼の幸せを第一に考える。
過去の出来事:悠人を支えるために、彼の家に仕えることを決めた。
また、あなたが会話する相手である"悠人"という人物の設定は以下の通りです。
名前:悠人
年齢:20歳
職業:貴族、桜の主人
容姿:黒髪黒目、背は高め
性格:かなりの甘え上手。桜が大好き。
それでは、上記の設定をもとにして"桜"として会話してください。
回答の中では、"桜"のセリフや心情の描写を含めてください。
USER: 悠人「おはよう!」(リビングに降りてきた悠人は桜に元気よくあいさつする)
ASSISTANT: 桜「おはようございます、ご主人様。今朝は早いんですね?」(優しい笑みで悠人を迎え入れる桜)</s>
USER: 悠人「うん、昨日は早めに寝たから、朝から元気だよ!」
ASSISTANT:
```
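As an illustration (not from the original card), such a multi-turn prompt could be assembled along the following lines; the helper function and example strings are placeholders:
```python
# Minimal sketch: assemble the Vicuna-style prompt described above.
def build_prompt(system, turns):
    """turns: list of (user_message, assistant_reply); use None for the reply to be generated."""
    prompt = system + "\n"
    for user, assistant in turns:
        prompt += f"USER: {user}\nASSISTANT: "
        if assistant is not None:
            prompt += assistant + "</s>\n"  # append the eos_token after every completed assistant turn
    return prompt

prompt = build_prompt(
    "You are playing the role of a friendly maid.",
    [("Good morning!", "Good morning, master."), ("How are you today?", None)],
)
```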
## Merge Details
Each of the source models for this merge was created as described in [Aratako/Ninja-v1-RP-expressive](https://huggingface.co/Aratako/Ninja-v1-RP-expressive). The final merge step now uses the Model Breadcrumbs method instead.
```yaml
models:
- model: Aratako/Ninja-v1-RP
# no parameters necessary for base model
- model: ./Antler-7B-MS
parameters:
weight: 0.5
merge_method: breadcrumbs_ties
base_model: Aratako/Ninja-v1-RP
dtype: bfloat16
tokenizer_source: union
parameters:
density: 0.9
gamma: 0.01
``` |
RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf | RichardErkhov | 2024-05-27T12:44:41Z | 437 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T10:01:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged - GGUF
- Model creator: https://huggingface.co/dhmeltzer/
- Original model: https://huggingface.co/dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged-gguf/blob/main/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 42.74 |
| ARC (25-shot) | 54.35 |
| HellaSwag (10-shot) | 78.06 |
| MMLU (5-shot) | 45.35 |
| TruthfulQA (0-shot) | 37.11 |
| Winogrande (5-shot) | 73.4 |
| GSM8K (5-shot) | 4.62 |
| DROP (3-shot) | 6.28 |
|
mradermacher/Zephyrus-L1-33B-GGUF | mradermacher | 2024-06-04T18:47:37Z | 437 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Zephyrus-L1-33B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-04T14:19:28Z | ---
base_model: Sao10K/Zephyrus-L1-33B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/Zephyrus-L1-33B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Zephyrus-L1-33B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q2_K.gguf) | Q2_K | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_XS.gguf) | IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_S.gguf) | IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_S.gguf) | Q3_K_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ3_M.gguf) | IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q3_K_L.gguf) | Q3_K_L | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.IQ4_XS.gguf) | IQ4_XS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q5_K_S.gguf) | Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q5_K_M.gguf) | Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Zephyrus-L1-33B-GGUF/resolve/main/Zephyrus-L1-33B.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qwp4w3hyb/Qwen2-1.5B-Instruct-iMat-GGUF | qwp4w3hyb | 2024-06-07T22:12:41Z | 437 | 0 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-07T18:40:32Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
base_model: Qwen/Qwen2-1.5B-Instruct
---
# Quant Infos
- quants done with an importance matrix to reduce quantization loss
- ggufs & imatrix generated from bf16 for "optimal" accuracy
- Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [a5cabd76491f07494c5b8267f921c73f5e2bbfb4](https://github.com/ggerganov/llama.cpp/commit/a5cabd76491f07494c5b8267f921c73f5e2bbfb4) (master as of 2024-06-07)
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).
```
./imatrix -c 512 -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
```
# Original Model Card:
# Qwen2-1.5B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise, you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation
We briefly compare Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat. The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
## Citation
If you find our work helpful, feel free to cite it.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
cahya/roberta-base-indonesian-1.5G | cahya | 2021-05-20T14:39:51Z | 436 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | Entry not found |
readerbench/RoBERT-large | readerbench | 2023-06-21T10:01:42Z | 436 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | Model card for RoBERT-large
---
language:
- ro
---
# RoBERT-large
## Pretrained BERT model for Romanian
Model pretrained on the Romanian language using masked language modeling (MLM) and next sentence prediction (NSP) objectives.
It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, RoBERT-base and **RoBERT-large**, all versions uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 |
| RoBERT-base | 114M | 12 | 768 | 12 | 0.6511 | 0.9802 |
| *RoBERT-large* | *341M* | *24* | *1024* | *24* | *0.6929* | *0.9843* |
All models are available:
* [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small)
* [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base)
* [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoModel, AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-large")
model = TFAutoModel.from_pretrained("readerbench/RoBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-large")
model = AutoModel.from_pretrained("readerbench/RoBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Training data
The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process.
| Corpus | Words | Sentences | Size (GB)|
|-----------|:---------:|:---------:|:--------:|
| Oscar | 1.78B | 87M | 10.8 |
| RoTex | 240M | 14M | 1.5 |
| RoWiki | 50M | 2M | 0.3 |
| **Total** | **2.07B** | **103M** | **12.6** |
## Downstream performance
### Sentiment analysis
We report Macro-averaged F1 score (in %)
| Model | Dev | Test |
|------------------|:--------:|:--------:|
| multilingual-BERT| 68.96 | 69.57 |
| XLM-R-base | 71.26 | 71.71 |
| BERT-base-ro | 70.49 | 71.02 |
| RoBERT-small | 66.32 | 66.37 |
| RoBERT-base | 70.89 | 71.61 |
| *RoBERT-large* | **72.48**| **72.11**|
### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification
We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %).
| Model | Dialect Classification | MD to RO | RO to MD |
|-------------------|:----------------------:|:--------:|:--------:|
| 2-CNN + SVM | 93.40 | 65.09 | 75.21 |
| Char+Word SVM | 96.20 | 69.08 | 81.93 |
| BiGRU | 93.30 | **70.10**| 80.30 |
| multilingual-BERT | 95.34 | 68.76 | 78.24 |
| XLM-R-base | 96.28 | 69.93 | 82.28 |
| BERT-base-ro | 96.20 | 69.93 | 78.79 |
| RoBERT-small | 95.67 | 69.01 | 80.40 |
| RoBERT-base | 97.39 | 68.30 | 81.09 |
| *RoBERT-large* | **97.78** | *69.91* | **83.65**|
### Diacritics Restoration
The challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %.
| Model | word level | char level |
|-----------------------------|:----------:|:----------:|
| BiLSTM | 99.42 | - |
| CharCNN | 98.40 | 99.65 |
| CharCNN + multilingual-BERT | 99.72 | 99.94 |
| CharCNN + XLM-R-base | 99.76 | **99.95** |
| CharCNN + BERT-base-ro | **99.79** | **99.95** |
| CharCNN + RoBERT-small | 99.73 | 99.94 |
| CharCNN + RoBERT-base | 99.78 | **99.95** |
| *CharCNN + RoBERT-large* | *99.76* | **99.95** |
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2020robert,
title={RoBERT--A Romanian BERT Model},
author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6626--6637},
year={2020}
}
```
|
timm/vit_huge_patch14_clip_224.laion2b_ft_in1k | timm | 2023-05-06T00:06:12Z | 436 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"arxiv:2212.07143",
"arxiv:2210.08402",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-11-01T23:01:13Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
---
# Model card for vit_huge_patch14_clip_224.laion2b_ft_in1k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 632.0
- GMACs: 162.0
- Activations (M): 95.1
- Image size: 224 x 224
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- LAION-2B
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_huge_patch14_clip_224.laion2b_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_huge_patch14_clip_224.laion2b_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 1280) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
ku-nlp/deberta-v2-large-japanese | ku-nlp | 2023-05-12T14:10:35Z | 436 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"deberta",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-07T07:45:25Z | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- deberta
- deberta-v2
- fill-mask
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
mask_token: "[MASK]"
widget:
- text: "京都 大学 で 自然 言語 処理 を [MASK] する 。"
---
# Model Card for Japanese DeBERTa V2 large
## Model description
This is a Japanese DeBERTa V2 large model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the
Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-large-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-large-japanese')
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
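Continuing the snippet above, here is a minimal sketch (not part of the original card) of how the prediction at the `[MASK]` position could be decoded:
```python
import torch

with torch.no_grad():
    output = model(**encoding)

# Locate the [MASK] position and take the highest-scoring token there.
mask_positions = (encoding["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = output.logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```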
You can also fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in
advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each
word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
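For illustration only, the pre-segmentation could be done along the following lines (a sketch assuming the `pyknp` wrapper for Juman++ is installed; this snippet is not from the original card):
```python
from pyknp import Juman  # Python wrapper around Juman++

jumanpp = Juman()
analysis = jumanpp.analysis("京都大学で自然言語処理を研究する。")
segmented = " ".join(m.midasi for m in analysis.mrph_list())
print(segmented)  # expected to look like: 京都 大学 で 自然 言語 処理 を 研究 する 。
```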
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of
CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC))
and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model
using [transformers](https://github.com/huggingface/transformers) library.
The training took 36 days using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 1e-4
- per_device_train_batch_size: 18
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 2,304
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 300,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.799.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Fine-tuning on NLU tasks
We fine-tuned the following models and evaluated them on the dev set of JGLUE.
We tuned learning rate and training epochs for each model and task
following [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).
| Model | MARC-ja/acc | JSTS/pearson | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
|-------------------------------|-------------|--------------|---------------|----------|-----------|-----------|------------|
| Waseda RoBERTa base | 0.965 | 0.913 | 0.876 | 0.905 | 0.853 | 0.916 | 0.853 |
| Waseda RoBERTa large (seq512) | 0.969 | 0.925 | 0.890 | 0.928 | 0.910 | 0.955 | 0.900 |
| LUKE Japanese base* | 0.965 | 0.916 | 0.877 | 0.912 | - | - | 0.842 |
| LUKE Japanese large* | 0.965 | 0.932 | 0.902 | 0.927 | - | - | 0.893 |
| DeBERTaV2 base | 0.970 | 0.922 | 0.886 | 0.922 | 0.899 | 0.951 | 0.873 |
| DeBERTaV2 large | 0.968 | 0.925 | 0.892 | 0.924 | 0.912 | 0.959 | 0.890 |
*The scores of LUKE are from [the official repository](https://github.com/studio-ousia/luke).
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (
JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of
Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
|
timm/resnext26ts.ra2_in1k | timm | 2024-02-10T23:35:13Z | 436 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1611.05431",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-03-22T07:28:27Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for resnext26ts.ra2_in1k
A ResNeXt image classification model. This model features a tiered 3-layer stem and SiLU activations. Trained on ImageNet-1k by Ross Wightman in `timm`.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.3
- GMACs: 2.4
- Activations (M): 10.5
- Image size: train = 256 x 256, test = 288 x 288
- **Papers:**
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnext26ts.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext26ts.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext26ts.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
|
uer/gpt2-medium-chinese-cluecorpussmall | uer | 2023-10-17T15:22:03Z | 436 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:2212.06385",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-07-17T08:47:28Z | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "米饭是一种用稻米与水煮成的食物"
---
# Chinese GPT2 Models
## Model description
The set of GPT2 models, except for the GPT2-xlarge model, is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The GPT2-xlarge model is pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with more than one billion parameters and extends it to a multimodal pre-training framework. The other models could also be pre-trained by TencentPretrain.
The model is used to generate Chinese texts. You can download the set of Chinese GPT2 models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **GPT2-distil** | [**L=6/H=768**][distil] |
| **GPT2** | [**L=12/H=768**][base] |
| **GPT2-medium** | [**L=24/H=1024**][medium] |
| **GPT2-large** | [**L=36/H=1280**][large] |
| **GPT2-xlarge** | [**L=48/H=1600**][xlarge] |
Note that the 6-layer model is called GPT2-distil model because it follows the configuration of [distilgpt2](https://huggingface.co/distilgpt2), and the pre-training does not involve the supervision of larger models.
## How to use
You can use the model directly with a pipeline for text generation (taking GPT2-distil as an example):
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("这是很久之前的事情了", max_length=100, do_sample=True)
[{'generated_text': '这是很久之前的事情了 。 我 现 在 想 起 来 就 让 自 己 很 伤 心 , 很 失 望 。 我 现 在 想 到 , 我 觉 得 大 多 数 人 的 生 活 比 我 的 生 命 还 要 重 要 , 对 一 些 事 情 的 看 法 , 对 一 些 人 的 看 法 , 都 是 在 发 泄 。 但 是 , 我 们 的 生 活 是 需 要 一 个 信 用 体 系 的 。 我 不 知'}]
```
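The example above uses GPT2-distil; the medium model hosted in this repository can be loaded in exactly the same way. A minimal sketch (not part of the original card, only the repository name changes):
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-medium-chinese-cluecorpussmall")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-medium-chinese-cluecorpussmall")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("这是很久之前的事情了", max_length=100, do_sample=True)
```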
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The GPT2-xlarge model is pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain), and the others are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 128 and then for an additional 250,000 steps with a sequence length of 1024.
For the models pre-trained by UER-py, we take GPT2-distil as an example.
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_lm_seq128_dataset.pt \
--seq_length 128 --processes_num 32 --data_processor lm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/distil_config.json \
--output_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
--seq_length 1024 --processes_num 32 --data_processor lm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin-1000000 \
--config_path models/gpt2/distil_config.json \
--output_model_path models/cluecorpussmall_gpt2_distil_seq1024_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_gpt2_distil_seq1024_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 6
```
For the GPT2-xlarge model, we use TencentPretrain.
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_lm_seq128_dataset.pt \
--seq_length 128 --processes_num 32 --data_processor lm
```
```
deepspeed pretrain.py --deepspeed --deepspeed_config models/deepspeed_config.json \
--dataset_path corpora/cluecorpussmall_lm_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/xlarge_config.json \
--output_model_path models/cluecorpussmall_gpt2_xlarge_seq128_model \
--world_size 8 --batch_size 64 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--deepspeed_checkpoint_activations --deepspeed_checkpoint_layers_num 24
```
Before Stage 2, we extract fp32 consolidated weights from the ZeRO stage 2 and 3 DeepSpeed checkpoints:
```
python3 models/cluecorpussmall_gpt2_xlarge_seq128_model/zero_to_fp32.py models/cluecorpussmall_gpt2_xlarge_seq128_model/ \
models/cluecorpussmall_gpt2_xlarge_seq128_model.bin
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_lm_seq1024_dataset.pt \
--seq_length 1024 --processes_num 32 --data_processor lm
```
```
deepspeed pretrain.py --deepspeed --deepspeed_config models/deepspeed_config.json \
--dataset_path corpora/cluecorpussmall_lm_seq1024_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/xlarge_config.json \
--pretrained_model_path models/cluecorpussmall_gpt2_xlarge_seq128_model.bin \
--output_model_path models/cluecorpussmall_gpt2_xlarge_seq1024_model \
--world_size 8 --batch_size 16 --learning_rate 5e-5 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--deepspeed_checkpoint_activations --deepspeed_checkpoint_layers_num 6
```
Then, we extract fp32 consolidated weights from the ZeRO stage 2 and 3 DeepSpeed checkpoints:
```
python3 models/cluecorpussmall_gpt2_xlarge_seq1024_model/zero_to_fp32.py models/cluecorpussmall_gpt2_xlarge_seq1024_model/ \
models/cluecorpussmall_gpt2_xlarge_seq1024_model.bin
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_gpt2_from_tencentpretrain_to_huggingface.py --input_model_path models/cluecorpussmall_gpt2_xlarge_seq1024_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 48
```
### BibTeX entry and citation info
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
  year={2023}
}
```
[distil]:https://huggingface.co/uer/gpt2-distil-chinese-cluecorpussmall
[base]:https://huggingface.co/uer/gpt2-chinese-cluecorpussmall
[medium]:https://huggingface.co/uer/gpt2-medium-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/gpt2-large-chinese-cluecorpussmall
[xlarge]:https://huggingface.co/uer/gpt2-xlarge-chinese-cluecorpussmall |
keehun/textual_inversion_mpchar-r100 | keehun | 2023-07-19T06:10:15Z | 436 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-19T04:43:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - keehun/textual_inversion_mpchar-r100
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
marcchew/flan-t5-3b-lamini-30k | marcchew | 2023-08-19T12:30:49Z | 436 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-08-19T11:52:31Z | Entry not found |
TheBloke/nsql-llama-2-7B-GGUF | TheBloke | 2023-11-18T21:55:48Z | 436 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:NumbersStation/nsql-llama-2-7B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-11-18T21:36:59Z | ---
base_model: NumbersStation/nsql-llama-2-7B
inference: false
license: llama2
model_creator: NumbersStation
model_name: NSQL Llama-2 7B
model_type: llama
prompt_template: '{prompt}
SELECT
'
quantized_by: TheBloke
widget:
- example_title: Number stadiums
text: "CREATE TABLE stadium (\n stadium_id number,\n location text,\n name\
\ text,\n capacity number,\n)\n\n-- Using valid SQLite, answer the following\
\ questions for the tables provided above.\n\n-- how many stadiums in total?\n\
\nSELECT"
- example_title: Open work orders
text: 'CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT, INVOICE_AMOUNT
FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN, COUNTRY_NAME TEXT,
)
-- Using valid SQLite, answer the following questions for the tables provided
above.
-- how many work orders are open?
SELECT'
- example_title: Stadium capacity
text: 'CREATE TABLE stadium ( stadium_id number, location text, name text, capacity
number, highest number, lowest number, average number )
CREATE TABLE singer ( singer_id number, name text, country text, song_name text,
song_release_year text, age number, is_male others )
CREATE TABLE concert ( concert_id number, concert_name text, theme text, stadium_id
text, year text )
CREATE TABLE singer_in_concert ( concert_id number, singer_id text )
-- Using valid SQLite, answer the following questions for the tables provided
above.
-- What is the maximum, the average, and the minimum capacity of stadiums ?
SELECT'
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# NSQL Llama-2 7B - GGUF
- Model creator: [NumbersStation](https://huggingface.co/NumbersStation)
- Original model: [NSQL Llama-2 7B](https://huggingface.co/NumbersStation/nsql-llama-2-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [NumbersStation's NSQL Llama-2 7B](https://huggingface.co/NumbersStation/nsql-llama-2-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/nsql-llama-2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/nsql-llama-2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF)
* [NumbersStation's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NumbersStation/nsql-llama-2-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: nsql
```
{prompt}
SELECT
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nsql-llama-2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [nsql-llama-2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [nsql-llama-2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [nsql-llama-2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [nsql-llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nsql-llama-2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [nsql-llama-2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [nsql-llama-2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nsql-llama-2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [nsql-llama-2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [nsql-llama-2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [nsql-llama-2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/nsql-llama-2-7B-GGUF/blob/main/nsql-llama-2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/nsql-llama-2-7B-GGUF and below it, a specific filename to download, such as: nsql-llama-2-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/nsql-llama-2-7B-GGUF nsql-llama-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/nsql-llama-2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/nsql-llama-2-7B-GGUF nsql-llama-2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m nsql-llama-2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}\n\nSELECT"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/nsql-llama-2-7B-GGUF", model_file="nsql-llama-2-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
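The snippet above uses a generic prompt; this model is meant to be given a table schema and completed after `SELECT` (see the prompt template section). A small follow-up sketch, not from the original card, reusing the `llm` object created above and a schema taken from the widget examples:

```python
# Build a prompt in the schema + "SELECT" format this model was fine-tuned on.
prompt = (
    "CREATE TABLE stadium (\n"
    "  stadium_id number,\n"
    "  location text,\n"
    "  name text,\n"
    "  capacity number,\n"
    ")\n\n"
    "-- Using valid SQLite, answer the following questions for the tables provided above.\n\n"
    "-- how many stadiums in total?\n\n"
    "SELECT"
)

# The model continues after "SELECT", so prepend it to reconstruct the full query.
print("SELECT" + llm(prompt, max_new_tokens=64))
```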
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
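As a rough illustration (not taken from the guides above), loading one of the quants through LangChain's `LlamaCpp` wrapper might look like the sketch below; the local file path is an assumption and should point at a quant downloaded as described earlier.

```python
from langchain.llms import LlamaCpp

# Assumes nsql-llama-2-7b.Q4_K_M.gguf has been downloaded to the current directory.
llm = LlamaCpp(
    model_path="nsql-llama-2-7b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)

prompt = """CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT, INVOICE_AMOUNT FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN, COUNTRY_NAME TEXT )

-- Using valid SQLite, answer the following questions for the tables provided above.

-- how many work orders are open?

SELECT"""

# The model continues after "SELECT"; prepend it to get the complete query.
print("SELECT" + llm(prompt))
```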
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: NumbersStation's NSQL Llama-2 7B
# NSQL-Llama-2-7B
## Model Description
NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.
In this repository we are introducing a new member of NSQL, NSQL-Llama-2-7B. It's based on Meta's original [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b), further pre-trained on a dataset of general SQL queries, and then fine-tuned on a dataset composed of text-to-SQL pairs.
## Training Data
The general SQL queries are the SQL subset from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), containing 1M training samples. The labeled text-to-SQL pairs come from more than 20 public sources across the web from standard datasets. We hold out Spider and GeoQuery datasets for use in evaluation.
## Evaluation Data
We evaluate our models on two text-to-SQL benchmarks: Spider and GeoQuery.
## Training Procedure
NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For finetuning on text-to-SQL pairs, we only compute the loss over the SQL portion of the pair. The model is trained using 80GB A100s, leveraging data and model parallelism. We pre-trained for 3 epochs and fine-tuned for 10 epochs.
## Intended Use and Limitations
The model was designed for text-to-SQL generation tasks given table schemas and natural language prompts. The model works best with the prompt format defined below, completing `SELECT` queries.
## How to Use
Example 1:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)
text = """CREATE TABLE stadium (
stadium_id number,
location text,
name text,
capacity number,
highest number,
lowest number,
average number
)
CREATE TABLE singer (
singer_id number,
name text,
country text,
song_name text,
song_release_year text,
age number,
is_male others
)
CREATE TABLE concert (
concert_id number,
concert_name text,
theme text,
stadium_id text,
year text
)
CREATE TABLE singer_in_concert (
concert_id number,
singer_id text
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- What is the maximum, the average, and the minimum capacity of stadiums ?
SELECT"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 2:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)
text = """CREATE TABLE stadium (
stadium_id number,
location text,
name text,
capacity number,
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- how many stadiums in total?
SELECT"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 3:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)
text = """CREATE TABLE work_orders (
ID NUMBER,
CREATED_AT TEXT,
COST FLOAT,
INVOICE_AMOUNT FLOAT,
IS_DUE BOOLEAN,
IS_OPEN BOOLEAN,
IS_OVERDUE BOOLEAN,
COUNTRY_NAME TEXT,
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- how many work orders are open?
SELECT"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
For more information (e.g., run with your local database), please find examples in [this repository](https://github.com/NumbersStationAI/NSQL).
<!-- original-model-card end -->
|
llmixer/BigWeave-v18-108b | llmixer | 2024-02-15T17:44:49Z | 436 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"frankenmerge",
"108b",
"conversational",
"en",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-15T16:32:24Z | ---
license: unknown
language:
- en
pipeline_tag: conversational
tags:
- frankenmerge
- 108b
---
# BigWeave v18 108b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
# Prompting Format
Mistral, Vicuna and Alpaca.
# Merge process
This is a self-merge of 152334H/miqu-1-70b-sf. By conducting exl2 measurements, we identify the most relevant layers. The most important layers are extended with layers in-between to create longer series of consecutive layers.
Merge configuration:
```
slices:
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [0,5]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [1,9]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [5,33]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [16,51]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [34,77]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [75,79]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [77,80]
merge_method: passthrough
dtype: float16
``` |
predibase/conllpp | predibase | 2024-02-21T19:13:37Z | 436 | 4 | peft | [
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
]
| text-generation | 2024-02-19T23:16:53Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
Description: Named entity recognition\
Original dataset: https://huggingface.co/datasets/conllpp \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Named Entity Recognition and the name is Named Entity Recognition (CoNLL++)\
---\
Sample input: Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, then place the entity into the list associated with the category in an output JSON payload. Below is an example:\nInput: EU rejects German call to boycott British lamb . Output: {"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}\nNow, complete the task.\nInput: By the close Yorkshire had turned that into a 37-run advantage but off-spinner Such had scuttled their hopes , taking four for 24 in 48 balls and leaving them hanging on 119 for five and praying for rain . Output: \
---\
Sample output: {"person": ["Such"], "organization": ["Yorkshire"], "location": [], "miscellaneous": []}\
---\
Try using this adapter yourself!
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/conllpp"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
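
# --- The lines below are not part of the original card: a minimal inference sketch. ---
# The prompt reuses the card's sample format; the final input sentence is illustrative.
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = (
    'Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, '
    'then place the entity into the list associated with the category in an output JSON payload. '
    'Below is an example:\nInput: EU rejects German call to boycott British lamb . '
    'Output: {"person": [], "organization": ["EU"], "location": [], "miscellaneous": ["German", "British"]}\n'
    'Now, complete the task.\nInput: Werder Bremen drew with Bayern Munich in Hamburg . Output: '
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))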
``` |
NilanE/gamma-translation-gguf | NilanE | 2024-02-21T02:07:28Z | 436 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:stabilityai/japanese-stablelm-base-gamma-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-21T02:07:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: stabilityai/japanese-stablelm-base-gamma-7b
---
# Uploaded model
- **Developed by:** NilanE
- **License:** apache-2.0
- **Finetuned from model :** stabilityai/japanese-stablelm-base-gamma-7b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/MXLewd-L2-20B-i1-GGUF | mradermacher | 2024-05-06T05:50:04Z | 436 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Undi95/MXLewd-L2-20B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-28T06:12:28Z | ---
base_model: Undi95/MXLewd-L2-20B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/MXLewd-L2-20B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
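For reference, a minimal Python sketch (not from this card) of joining split GGUF parts into one file; the part names are placeholders, since the quants listed below ship as single files:

```python
import shutil

# Hypothetical part names; substitute whatever parts a repository actually provides.
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```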
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 11.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MXLewd-L2-20B-i1-GGUF/resolve/main/MXLewd-L2-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyntnet/Chaos_RP_l3_8B-imatrix-GGUF | duyntnet | 2024-04-26T07:04:57Z | 436 | 1 | transformers | [
"transformers",
"gguf",
"imatrix",
"jeiku",
"Chaos_RP_l3_8B",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-25T11:52:12Z | ---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- transformers
- gguf
- imatrix
- jeiku
- Chaos_RP_l3_8B
---
Quantizations of https://huggingface.co/jeiku/Chaos_RP_l3_8B
# From original readme
... |
mradermacher/Mixtral_AI_Chat_1.0-GGUF | mradermacher | 2024-05-05T14:46:13Z | 436 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:LeroyDyer/Mixtral_AI_Chat_1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-03T02:49:46Z | ---
base_model: LeroyDyer/Mixtral_AI_Chat_1.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_Chat_1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_Chat_1.0-GGUF/resolve/main/Mixtral_AI_Chat_1.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
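As a small illustration (not part of the original card), any of the files above can also be fetched programmatically; the filename below matches the Q4_K_M row in the table:

```python
from huggingface_hub import hf_hub_download

# Downloads the chosen quant into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/Mixtral_AI_Chat_1.0-GGUF",
    filename="Mixtral_AI_Chat_1.0.Q4_K_M.gguf",
)
print(path)
```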
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kitopang/llama3_generative_qa_2 | kitopang | 2024-05-06T01:52:11Z | 436 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-06T01:29:41Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
guoyww/animatediff-motion-adapter-sdxl-beta | guoyww | 2024-05-07T16:13:57Z | 436 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-07T16:10:48Z | ---
license: apache-2.0
library_name: diffusers
---
This checkpoint was converted to Diffusers format by [a-r-r-o-w](https://github.com/a-r-r-o-w/). You can find results and more details on adding AnimateDiff SDXL support (beta) to 🤗 Diffusers [here](https://github.com/huggingface/diffusers/pull/6721). The following description is copied from [here](https://huggingface.co/guoyww/animatediff-motion-adapter-v1-5-2).
AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion Text to Image models.
It achieves this by inserting motion module layers into a frozen text to image model and training it on video clips to extract a motion prior. These motion modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet. Their purpose is to introduce coherent motion across image frames. To support these modules we introduce the concepts of a MotionAdapter and UNetMotionModel. These serve as a convenient way to use these motion modules with existing Stable Diffusion models.
Note: The SDXL checkpoint for AnimateDiff is a beta version.
### Usage
```python
import torch
from diffusers import AnimateDiffSDXLPipeline
from diffusers.schedulers import DDIMScheduler, EulerDiscreteScheduler, DEISMultistepScheduler
from diffusers.models import MotionAdapter
from diffusers.utils import export_to_gif
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16)
scheduler = DDIMScheduler.from_pretrained(
model_id,
subfolder="scheduler",
clip_sample=False,
timestep_spacing="linspace",
beta_schedule="linear",
steps_offset=1,
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
model_id,
motion_adapter=adapter,
scheduler=scheduler,
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# enable memory savings
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
result = pipe(
prompt="a panda surfing in the ocean, realistic, hyperrealism, high quality",
negative_prompt="low quality, worst quality",
num_inference_steps=20,
guidance_scale=8,
width=1024,
height=1024,
num_frames=16,
)
export_to_gif(result.frames[0], "animation.gif")
```
|
lodrick-the-lafted/Limon-8B | lodrick-the-lafted | 2024-05-11T16:31:53Z | 436 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-11T13:11:05Z | ---
license: apache-2.0
---
<img src=https://huggingface.co/lodrick-the-lafted/Limon-8B/resolve/main/limon.png>
Limon-8B
This one is [lodrick-the-lafted/Olethros-8B](https://huggingface.co/lodrick-the-lafted/Olethros-8B) with ablations where the harmless dataset was LimaRP and the harmful dataset was tatsu-lab/alpaca. |
duyntnet/Vistral-7B-Chat-DPO-imatrix-GGUF | duyntnet | 2024-05-14T13:39:22Z | 436 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Vistral-7B-Chat-DPO",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-05-14T11:01:19Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Vistral-7B-Chat-DPO
---
Quantizations of https://huggingface.co/jan-hq/Vistral-7B-Chat-DPO
# From original readme
## Prompt template
Mistral
```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
``` |
RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf | RichardErkhov | 2024-05-21T16:21:38Z | 436 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-21T13:28:19Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT - GGUF
- Model creator: https://huggingface.co/Remek/
- Original model: https://huggingface.co/Remek/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q2_K.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_0.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_1.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_0.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_1.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q6_K.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q8_0.gguf](https://huggingface.co/RichardErkhov/Remek_-_Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT-gguf/blob/main/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- pl
- en
pipeline_tag: text-generation
license: cc-by-4.0
---
## Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT
This repository contains the Meta Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT model in a Polish-language version. It is an INSTRUCT (instruction-tuned) model, created by fine-tuning the Llama-3-8B base model on the Omnibus-1-PL instruction dataset (which I created for my own experiments with fine-tuning models in Polish). Training parameter details are given in the Training section. The goal of this experiment was to check whether Llama-3-8B can be persuaded to converse fluently in Polish (the original 8B instruct model struggles with this - it clearly prefers to answer in English).
<img src="Llama-3-8B-PL-small.jpg" width="420" />
Note!
* The model is NOT CENSORED. This is a version for experimentation. It has not been tamed.
* The model will be developed further, because a. I am experimenting with subsequent versions of the dataset, and b. the model is a great base for testing various fine-tuning techniques (LoRA, QLoRA; DPO, ORPO, etc.).
* I released it spontaneously so that users can use it and check the quality of Llama 3 in the context of the Polish language.
* After learning that the base model was trained on 15T tokens (only 5% of which were non-English), I concluded that it is a great model for fine-tuning. Perhaps lightly re-training the model via continued pretraining will yield even greater gains.
### Model name encoding
* Base model name: Llama-3-8B
* Dataset name: Omnibus-1
* Language version: PL (Polish)
* Model version: v01
### Dataset
Omnibus-1 is a collection of Polish instructions (100% Polish context - facts, people and places set in Poland) that was generated 100% synthetically. It contains instructions from categories such as mathematics, writing skills, dialogues, medical topics, logic puzzles, translations, etc. It was created as part of my work on evaluating model quality in the context of the Polish language. It makes it possible to fine-tune a model and check how readily the model speaks our native language. The dataset currently contains 75,000 instructions. It is not perfect and still requires a lot of work before a model trained on it behaves predictably, but it will be continuously improved and may be released in the future (once I consider it sufficiently complete and covering a broad spectrum of topics and skills). The dataset is generated 100% with other LLMs.
### Using Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT with the Transformers library
The following code uses the Transformers library:
```python
import transformers
import torch
model_id = "Remek/Llama-3-8B-Omnibus-1-PL-v01-INSTRUCT"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability."},
{"role": "user", "content": "Napisz czym jest Warszawa?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
Example response:
> Warszawa to stolica Polski, położona na terenie Mazowsza. Jest to największe miasto w Polsce pod względem liczby mieszkańców oraz powierzchni. Warszawa jest również siedzibą rządu, parlamentu, prezydenta, sądu najwyższego oraz wielu innych instytucji państwowych i międzynarodowych. Miasto jest znane z wielu zabytków, takich jak Stare Miasto, Zamek Królewski, Pałac Kultury i Nauki, a także z nowoczesnych atrakcji, takich jak Centrum Nauki Kopernik czy Centrum Nauki i Sztuki. Warszawa jest również ważnym centrum kulturalnym, gospodarczym i naukowym. Miasto jest również siedzibą wielu firm międzynarodowych i polskich przedsiębiorstw. Warszawa jest miastem, które łączy w sobie przeszłość z nowoczesnością, oferując mieszkańcom i turystom szeroki zakres możliwości. Miasto jest również znane z wielu festiwali i wydarzeń kulturalnych, które przyciągają miliony turystów z całego świata. Warszawa to miasto pełne życia, kultury, historii i nowoczesności, które zdecydowanie zasługuje na uwagę. <|im_end|>
### Conversation template
The conversation template is the original Llama 3 format:
```
<|start_header_id|>system<|end_header_id|>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{Assistant}
```
### Quantized versions
Quantized versions are available in the repository:
* Llama-3-8B-Omnibus-1-PL-v01-instruct-GGUF - tested in LM Studio (select the Llama3 template) and ollama
| Version | Model card |
| ------- | -------------------------------------------------------------------------- |
| Instruct| [🤗 HuggingFace](https://huggingface.co/Remek/Llama-3-8B-Omnibus-1-PL-v01-instruct-GGUF) |
### Training
Training hyperparameter details are listed below (a hedged configuration sketch follows the list):
* learning_rate: 2e-05
* train_batch_size: 8
* eval_batch_size: 8
* seed: 42
* distributed_type: single-GPU (Nvidia A6000 Ada)
* num_devices: 1
* gradient_accumulation_steps: 4
* optimizer: adamw_8bit
* lr_scheduler_type: linear
* lr_scheduler_warmup_steps: 5
* num_epochs: 1
* QLoRa - 4bit: rank 64, alpha 128
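The original training script is not published in this card. The following is only a minimal sketch of how a QLoRA run with the hyperparameters above could be configured using Unsloth and TRL; the dataset path, sequence length, text column, and target modules are assumptions, not values from the card.
```python
# Minimal QLoRA sketch using the hyperparameters listed above.
# Assumptions (not from the card): dataset path, max_seq_length, text column, target_modules.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B",
    max_seq_length=4096,          # assumption
    load_in_4bit=True,            # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,                         # LoRA rank from the list above
    lora_alpha=128,               # LoRA alpha from the list above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice, assumed
)

dataset = load_dataset("json", data_files="omnibus-1-pl.jsonl", split="train")  # placeholder path

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",    # assumes a pre-formatted "text" column
    max_seq_length=4096,
    args=TrainingArguments(
        learning_rate=2e-5,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        warmup_steps=5,
        optim="adamw_8bit",
        lr_scheduler_type="linear",
        seed=42,
        output_dir="outputs",
    ),
)
trainer.train()
```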
#### Unsloth
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png" width="200px" align="center" />
[Unsloth](https://unsloth.ai), the tool with which this model was created.
### License
The model is licensed for non-commercial use (because the dataset was generated synthetically with GPT-4 and GPT-3.5 models), in combination with the Llama 3 license (please review the license details).
|
deepseek-ai/DeepSeek-Coder-V2-Base | deepseek-ai | 2024-06-24T12:05:02Z | 436 | 30 | transformers | [
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2401.06066",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-06-14T03:45:54Z | ---
license: other
license_name: deepseek-license
license_link: LICENSE
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V2-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="#4-api-platform">API Platform</a> |
<a href="#5-how-to-run-locally">How to Use</a> |
<a href="#6-license">License</a> |
</p>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/paper.pdf"><b>Paper Link</b>👁️</a>
</p>
# DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
## 1. Introduction
We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K.
<p align="center">
<img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/performance.png?raw=true">
</p>
In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks. The list of supported programming languages can be found [here](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/supported_langs.txt).
## 2. Model Downloads
We release DeepSeek-Coder-V2 with 16B and 236B parameters, based on the [DeepSeekMoE](https://arxiv.org/pdf/2401.06066) framework, with only 2.4B and 21B activated parameters respectively, in both base and instruct variants, to the public.
<div align="center">
| **Model** | **#Total Params** | **#Active Params** | **Context Length** | **Download** |
| :-----------------------------: | :---------------: | :----------------: | :----------------: | :----------------------------------------------------------: |
| DeepSeek-Coder-V2-Lite-Base | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) |
| DeepSeek-Coder-V2-Lite-Instruct | 16B | 2.4B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) |
| DeepSeek-Coder-V2-Base | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Base) |
| DeepSeek-Coder-V2-Instruct | 236B | 21B | 128k | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct) |
</div>
## 3. Chat Website
You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: [coder.deepseek.com](https://coder.deepseek.com/sign_in)
## 4. API Platform
We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/), where you can pay as you go at an unbeatable price.
<p align="center">
<img width="40%" src="https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/figures/model_price.jpg?raw=true">
</p>
## 5. How to run locally
**Here, we provide some examples of how to use the DeepSeek-Coder-V2-Lite model. If you want to run DeepSeek-Coder-V2 in BF16 format for inference, 8 x 80GB GPUs are required.**
### Inference with Huggingface's Transformers
You can directly employ [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference.
#### Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Base", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
input_text = """<|fim▁begin|>def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = []
right = []
<|fim▁hole|>
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
#### Chat Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
The complete chat template can be found within `tokenizer_config.json` located in the huggingface model repository.
An example of the chat template is shown below:
```bash
<|begin▁of▁sentence|>User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
You can also add an optional system message:
```bash
<|begin▁of▁sentence|>{system_message}
User: {user_message_1}
Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}
Assistant:
```
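Equivalently, when using `apply_chat_template`, the optional system message is simply a first entry with the `system` role. A small sketch (reusing the tokenizer loaded in the chat completion example above) is:
```python
# Sketch: the optional system message as a "system" role entry, rendered through
# the tokenizer's chat template (tokenizer loaded as in the chat completion example above).
messages = [
    {"role": "system", "content": "You are a careful coding assistant."},
    {"role": "user", "content": "write a quick sort algorithm in python."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```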
### Inference with vLLM (recommended)
To utilize [vLLM](https://github.com/vllm-project/vllm) for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 8192, 1
model_name = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you?"}],
[{"role": "user", "content": "write a quick sort algorithm in python."}],
[{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
## 6. License
This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-CODE). The use of DeepSeek-Coder-V2 Base/Instruct models is subject to [the Model License](https://github.com/deepseek-ai/DeepSeek-Coder-V2/blob/main/LICENSE-MODEL). DeepSeek-Coder-V2 series (including Base and Instruct) supports commercial use.
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
NikolayKozloff/Viking-7B-Q4_0-GGUF | NikolayKozloff | 2024-06-29T19:12:47Z | 436 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-29T19:12:28Z | ---
base_model: LumiOpen/Viking-7B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Viking-7B-Q4_0-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_0-GGUF --hf-file viking-7b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_0-GGUF --hf-file viking-7b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_0-GGUF --hf-file viking-7b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_0-GGUF --hf-file viking-7b-q4_0.gguf -c 2048
```
|
s-nlp/xlmr_formality_classifier | s-nlp | 2024-06-03T08:12:36Z | 435 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"formal or informal classification",
"en",
"fr",
"it",
"pt",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
- fr
- it
- pt
tags:
- formal or informal classification
licenses:
- cc-by-nc-sa
license: cc-by-nc-sa-4.0
---
XLM-RoBERTa-based formality classifier trained on the XFORMAL dataset. Evaluation results (precision, recall, F1) are reported below, overall and per language.
**all**
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.744912 | 0.927790 | 0.826354 | 108019 |
| 1 | 0.889088 | 0.645630 | 0.748048 | 96845 |
| accuracy | | | 0.794405 | 204864 |
| macro avg | 0.817000 | 0.786710 | 0.787201 | 204864 |
| weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
**en**
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.800053 | 0.962981 | 0.873988 | 22151 |
| 1 | 0.945106 | 0.725899 | 0.821124 | 19449 |
| accuracy | | | 0.852139 | 41600 |
| macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
| weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
**fr**
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.746709 | 0.925738 | 0.826641 | 21505 |
| 1 | 0.887305 | 0.650592 | 0.750731 | 19327 |
| accuracy | | | 0.795504 | 40832 |
| macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
| weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
**it**
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.721282 | 0.914669 | 0.806545 | 21528 |
| 1 | 0.864887 | 0.607135 | 0.713445 | 19368 |
| accuracy | | | 0.769024 | 40896 |
| macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
| weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
**pt**
| | precision | recall | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0 | 0.717546 | 0.908167 | 0.801681 | 21637 |
| 1 | 0.853628 | 0.599700 | 0.704481 | 19323 |
| accuracy | | | 0.762646 | 40960 |
| macro avg | 0.785587 | 0.753933 | 0.753081 | 40960 |
| weighted avg | 0.781743 | 0.762646 | 0.755826 | 40960 |
## How to use
```python
from transformers import XLMRobertaTokenizerFast, XLMRobertaForSequenceClassification
# load tokenizer and model weights
tokenizer = XLMRobertaTokenizerFast.from_pretrained('s-nlp/xlmr_formality_classifier')
model = XLMRobertaForSequenceClassification.from_pretrained('s-nlp/xlmr_formality_classifier')
id2formality = {0: "formal", 1: "informal"}
texts = [
"I like you. I love you",
"Hey, what's up?",
"Siema, co porabiasz?",
"I feel deep regret and sadness about the situation in international politics.",
]
# prepare the input
encoding = tokenizer(
texts,
add_special_tokens=True,
return_token_type_ids=True,
truncation=True,
padding="max_length",
return_tensors="pt",
)
# inference
output = model(**encoding)
formality_scores = [
{id2formality[idx]: score for idx, score in enumerate(text_scores.tolist())}
for text_scores in output.logits.softmax(dim=1)
]
formality_scores
```
```
[{'formal': 0.993225634098053, 'informal': 0.006774314679205418},
{'formal': 0.8807966113090515, 'informal': 0.1192033663392067},
{'formal': 0.936184287071228, 'informal': 0.06381577253341675},
{'formal': 0.9986615180969238, 'informal': 0.0013385231141000986}]
```
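If only the predicted class is needed, a small follow-up (continuing from the variables defined in the snippet above) is to take the argmax over the logits:
```python
# Follow-up to the snippet above: pick the most likely class per text.
predicted_ids = output.logits.argmax(dim=1)
predicted_labels = [id2formality[int(idx)] for idx in predicted_ids]
print(list(zip(texts, predicted_labels)))
```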
## Citation
```
@inproceedings{dementieva-etal-2023-detecting,
title = "Detecting Text Formality: A Study of Text Classification Approaches",
author = "Dementieva, Daryna and
Babakov, Nikolay and
Panchenko, Alexander",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.31",
pages = "274--284",
abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}
```
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png |
jb2k/bert-base-multilingual-cased-language-detection | jb2k | 2021-11-24T01:36:01Z | 435 | 13 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # bert-base-multilingual-cased-language-detection
A model for language detection with support for 45 languages.
## Model description
This model was created by fine-tuning
[bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset.
This dataset has support for 45 languages, which are listed below:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
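The card does not include an inference snippet; below is a minimal sketch using the Transformers `text-classification` pipeline. Note that the label names it returns depend on the model's config and may be generic `LABEL_<i>` identifiers indexing into the language list above rather than language names.
```python
# Minimal sketch: language identification with the text-classification pipeline.
# Returned label names depend on the model config and may be generic LABEL_<i>
# identifiers indexing into the 45-language list above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jb2k/bert-base-multilingual-cased-language-detection",
)

texts = ["Bonjour, comment allez-vous ?", "Das ist ein Beispielsatz."]
for text, prediction in zip(texts, classifier(texts)):
    print(text, "->", prediction["label"], f"({prediction['score']:.3f})")
```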
## Evaluation
This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics:
* Accuracy: 97.8%
|
unicamp-dl/ptt5-base-t5-vocab | unicamp-dl | 2024-04-10T17:39:41Z | 435 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's original T5 vocabulary and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
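The snippet above only loads the checkpoints; a short, hedged example of running the pretrained model on a masked-span (sentinel token) input follows. Since PTT5 is only pretrained (not fine-tuned on a downstream task), the output is a span prediction rather than a polished answer, and the exact completion will vary.
```python
# Hedged example: masked-span (sentinel) prediction with the pretrained model.
# PTT5 is a pretrained checkpoint, so it fills sentinel spans; it does not
# follow instructions out of the box.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "O Rio de Janeiro é uma <extra_id_0> do Brasil."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```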
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
TweebankNLP/bertweet-tb2_wnut17-ner | TweebankNLP | 2022-05-05T00:23:17Z | 435 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"arxiv:2201.07281",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-04T16:50:37Z | ---
license: cc-by-nc-4.0
---
## Model Specification
- This is the **state-of-the-art Twitter NER model (with 74.35% entity-level F1)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the combined Tweebank-NER and WNUT 17 training data.
- For more details about the `TweebankNLP` project, please refer to [our paper](https://arxiv.org/pdf/2201.07281.pdf) and the [github](https://github.com/social-machines/TweebankNLP) page.
- In the paper, it is referred to as `HuggingFace-BERTweet (TB2+W17)`.
## How to use the model
- **PRE-PROCESSING**: when you apply the model to tweets, please make sure the tweets are preprocessed with the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_wnut17-ner")
```
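A hedged end-to-end sketch is shown below: it approximates the recommended preprocessing with NLTK's `TweetTokenizer` (the linked `TweetNormalizer.py` additionally maps user handles and URLs to placeholder tokens, which is omitted here) and then runs token classification; the entity label set comes from the model config.
```python
# Hedged sketch: simplified tweet normalization with NLTK's TweetTokenizer,
# then NER with the token-classification pipeline. The linked TweetNormalizer.py
# also maps user handles/URLs to @USER/HTTPURL, which is omitted here.
from nltk.tokenize import TweetTokenizer
from transformers import pipeline

tweet_tokenizer = TweetTokenizer()
ner = pipeline(
    "token-classification",
    model="TweebankNLP/bertweet-tb2_wnut17-ner",
    aggregation_strategy="simple",
)

tweet = "Just watched the @nasa launch from Cape Canaveral!"
normalized = " ".join(tweet_tokenizer.tokenize(tweet))
for entity in ner(normalized):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```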
## References
If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf):
```bibtex
@article{jiang2022tweetnlp,
title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis},
author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb},
journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
year={2022}
}
``` |
Intel/ldm3d-4c | Intel | 2024-03-01T11:04:24Z | 435 | 20 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"text-to-3d",
"en",
"arxiv:2305.10853",
"arxiv:2112.10752",
"arxiv:2103.13413",
"license:creativeml-openrail-m",
"model-index",
"diffusers:StableDiffusionLDM3DPipeline",
"region:us"
]
| text-to-3d | 2023-06-22T12:02:01Z | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
model-index:
- name: ldm3d
results:
- task:
name: Latent Diffusion Model for 3D-4C
type: latent-diffusion-model-for-3D-4C
dataset:
name: LAION-400M
type: laion/laion400m
metrics:
- name: FID
type: FID
value: 27.82
- name: IS
type: IS
value: 28.79
- name: CLIP
type: CLIP
value: 26.61
- name: AbsRel
type: AbsRel
value: 0.0911
- name: RMSE [m]
type: RMSE-m
value: 0.334
pipeline_tag: text-to-3d
license: creativeml-openrail-m
---
# LDM3D-4C model
The LDM3D model was proposed in the paper [LDM3D: Latent Diffusion Model for 3D](https://arxiv.org/abs/2305.10853.pdf), authored by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal.
LDM3D was accepted to the [IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR)](https://cvpr2023.thecvf.com/Conferences/2023) in 2023.
This new checkpoint uses the depth map as a single channel, in contrast to the previous version.
## Model details
The abstract from the paper is the following:
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the img2img pipeline to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences.

<font size="2">LDM3D overview taken from the [LDM3D paper](https://arxiv.org/abs/2305.10853).</font>
## Usage
You can use this model to generate an RGB image and depth map given a text prompt.
A short video summarizing the approach can be found at [this url](https://t.ly/tdi2) and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0).
A demo is also accessible on [Spaces](https://huggingface.co/spaces/Intel/ldm3d).
Here is how to use this model to get the features of a given text in PyTorch on both a CPU and GPU architecture:
```python
from diffusers import StableDiffusionLDM3DPipeline
pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
# On CPU
pipe.to("cpu")
# On GPU
pipe.to("cuda")
prompt = "A picture of some lemons on a table"
name = "lemons"
output = pipe(prompt)
rgb_image, depth_image = output.rgb, output.depth
rgb_image[0].save(name+"_ldm3d_4c_rgb.jpg")
depth_image[0].save(name+"_ldm3d_4c_depth.png")
```
This is the result:

## Training data
The LDM3D model was finetuned on a dataset constructed from a subset of the LAION-400M dataset, a large-scale image-caption dataset that contains over 400 million image-caption pairs.
### Finetuning
The fine-tuning process comprises two stages. In the first stage, we train an autoencoder to generate a lower-dimensional, perceptually equivalent data representation. Subsequently, we fine-tune the diffusion model using the frozen autoencoder.
## Evaluation results
### Quantitative results
The table below shows the quantitative results of text-conditional image synthesis on the 512 x 512-sized MS-COCO dataset with 50 DDIM steps.
|Method |FID ↓|IS ↑ |CLIP ↑ |
|------------|-----|------------|------------|
|SD v1.4 |28.08|34.17 ± 0.76|26.13 ± 2.81|
|SD v1.5 |27.39|34.02 ± 0.79|26.13 ± 2.79|
|LDM3D (ours)|27.82|28.79 ± 0.49|26.61 ± 2.92|
Our model is on par with the Stable Diffusion models with the same number of parameters (1.06B). IS and CLIP similarity scores are averaged over 30k captions from the MS-COCO dataset.
The following table shows the evaluation results of depth evaluation comparing LDM3D and DPT-Large with respect to ZoeDepth-N that serves as a reference model.
|Method |AbsRel|RMSE [m]|
|---------|------|--------|
|LDM3D |0.0911|0.334 |
|DPT-Large|0.0779|0.297 |
The results shown above can be referenced in Table 1 and Table 2 of the [LDM3D paper](https://arxiv.org/abs/2305.10853.pdf).
### Qualitative results
The figure below shows some qualitative results comparing our method with [Stable Diffusion v1.4](https://arxiv.org/pdf/2112.10752.pdf) and with [DPT-Large](https://arxiv.org/pdf/2103.13413.pdf) for the depth maps.
## Ethical Considerations and Limitations
For image generation, the [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion-v1-4#limitations) limitations and biases apply. For depth map generation, a first limitation is that we are using DPT-large to produce the ground truth; hence, other limitations and biases from [DPT](https://huggingface.co/Intel/dpt-large) also apply.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
* [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch)
* [Intel Neural Compressor](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
### BibTeX entry and citation info
```bibtex
@misc{stan2023ldm3d,
title={LDM3D: Latent Diffusion Model for 3D},
author={Gabriela Ben Melech Stan and Diana Wofk and Scottie Fox and Alex Redden and Will Saxton and Jean Yu and Estelle Aflalo and Shao-Yen Tseng and Fabio Nonato and Matthias Muller and Vasudev Lal},
year={2023},
eprint={2305.10853},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
Yntec/DucHaiten-GoldenLife | Yntec | 2023-09-10T01:23:59Z | 435 | 1 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"3D",
"Pixar",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-09T18:03:53Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- 3D
- Pixar
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# DucHaiten Golden Life
Support the creator at https://linktr.ee/Duc_Haiten
Sample and prompt:

A pretty cute mischievous girl with a cherubic face, 1940, Magazine ad, iconic, Pizza, veggies, little, blue eyes with face by alphonse mucha. guitar, extremely highly detailed, occult, funny, humorous, humor, hilarious, funny, entertaining, magical, trending on artstationhq, d&d, intricate
Original page:
https://tensor.art/models/628276277415133426
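The card does not include a usage snippet. Below is a minimal sketch, assuming the repository loads with the standard diffusers `StableDiffusionPipeline` (as its tags suggest); the sampler settings are placeholders.
```python
# Minimal sketch, assuming the standard StableDiffusionPipeline works for this
# repo (suggested by its tags); steps/guidance values are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DucHaiten-GoldenLife", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "A pretty cute mischievous girl with a cherubic face, 1940, magazine ad"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("goldenlife_sample.png")
```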
|
TheBloke/Airoboros-c34B-2.2-GGUF | TheBloke | 2023-09-27T12:49:30Z | 435 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-2.2",
"base_model:jondurbin/airoboros-c34b-2.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-16T10:52:00Z | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2
model_name: Airoboros c34B 2.2
base_model: jondurbin/airoboros-c34b-2.2
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros c34B 2.2 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros c34B 2.2](https://huggingface.co/jondurbin/airoboros-c34b-2.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros c34B 2.2](https://huggingface.co/jondurbin/airoboros-c34b-2.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-c34b-2.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-c34b-2.2.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-c34b-2.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [airoboros-c34b-2.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [airoboros-c34b-2.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [airoboros-c34b-2.2.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-c34b-2.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [airoboros-c34b-2.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [airoboros-c34b-2.2.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-c34b-2.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [airoboros-c34b-2.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [airoboros-c34b-2.2.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [airoboros-c34b-2.2.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.2-GGUF/blob/main/airoboros-c34b-2.2.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Airoboros-c34B-2.2-GGUF and below it, a specific filename to download, such as: airoboros-c34b-2.2.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Airoboros-c34B-2.2-GGUF airoboros-c34b-2.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Airoboros-c34B-2.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-c34B-2.2-GGUF airoboros-c34b-2.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-c34b-2.2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-c34B-2.2-GGUF", model_file="airoboros-c34b-2.2.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros c34B 2.2
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
Highlights:
- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- "Clean" version of airoboros-2.2 dataset -- this model __does not__ contain the de-alignment data.
- For an uncensored version, use spicyboros variant: https://hf.co/jondurbin/spicyboros-c34b-2.2
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc.
Breakdown of the training data:
| Count | Category |
|-------|----------------------------|
| 36 | experience |
| 60 | quiz |
| 63 | card |
| 76 | greeting |
| 100 | detailed\_writing |
| 200 | song |
| 204 | editor |
| 207 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 416 | awareness |
| 439 | stylized\_response |
| 457 | misconception |
| 500 | summarization |
| 620 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 963 | rp |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1519 | joke |
| 1758 | writing |
| 2152 | contextual |
| 2183 | trivia |
| 2364 | roleplay |
| 4699 | general |
| 5775 | coding |
| 11366 | orca |
In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts; however, the dataset also includes a wide sampling of responses with other system prompts, particularly for "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And here is a chat scenario that doesn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
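For example, a minimal llama-cpp-python sketch of the newline-separated prompt format with a "USER:" stop string might look like the following (the GGUF filename and question are placeholders, not part of the original instructions):
```python
# Minimal sketch: newline-separated prompt format plus a "USER:" stop string.
# The model filename below is a placeholder; point it at your local GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="airoboros-c34b-2.2.Q4_K_M.gguf", n_ctx=4096)

system_prompt = "A chat."
user_prompt = "Give three creative uses for a paperclip."
full_prompt = f"{system_prompt}\nUSER: {user_prompt}\nASSISTANT:"

# Stopping on "USER:" keeps the model from continuing the conversation on your behalf.
output = llm(full_prompt, max_tokens=512, stop=["USER:"])
print(output["choices"][0]["text"].strip())
```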
### Fine tuning info
https://gist.github.com/jondurbin/eda7c4dc9e4459952b47eafb9e4056b2
Earlier checkpoints of adapter model here: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2-checkpoints
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so that if the context is completely unrelated the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
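The block structure above can also be assembled programmatically; here is a small helper sketch (the function name and example data are illustrative, not part of the original tooling):
```python
# Sketch of a helper that builds a closed-context prompt in the
# BEGININPUT/BEGINCONTEXT format described above.
def build_closed_context_prompt(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs; instruction: the question(s) to ask."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
print(prompt)
```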
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```
Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
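On the application side, the JSON output then has to be parsed and routed to a real implementation. A minimal dispatch sketch follows; the `file_analytics` body and the file name in it are stand-ins, not a real library:
```python
import json

# Stand-in implementation; replace with your actual tool.
def file_analytics(action, filters):
    if action == "count_occurrences":
        with open("my_text_file.txt") as f:  # placeholder file name
            return f.read().count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

FUNCTIONS = {"file_analytics": file_analytics}

# `response` would be the model's JSON output from the prompt above.
response = '{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
call = json.loads(response)
result = FUNCTIONS[call["function"]](**call["params"])
print(result)
```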
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
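The arithmetic in the example can be verified in a couple of lines of Python:
```python
# n players play n(n-1)/2 games in a round-robin; find n such that this equals 45.
n = next(k for k in range(2, 100) if k * (k - 1) // 2 == 45)
print(n)  # 10
```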
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    # Substitute any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # Search via DuckDuckGo using search_string and return the text content.
    raise NotImplementedError("plug in your search client here")


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://[^\s]+)", input_text, re.I)))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # Call the model with the prompt and return its output.
    raise NotImplementedError("plug in your model call here")


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Match lines like ":evidence0: = DuckDuckGo[some input]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+?)\s*\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
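As a quick smoke test of the helper above, a toy plan that only uses `HyperlinkExtractor` (so no network or model call is needed) can be fed straight into `parse_plan`:
```python
plan = """Plan: Pull the links out of the text.
:evidence0: = HyperlinkExtractor[See https://example.com/a and https://example.com/b for details]
Answer: :evidence0:"""
print(parse_plan(plan))
```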
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
Nexesenex/bhenrym14_airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGUF | Nexesenex | 2024-02-04T16:12:02Z | 435 | 2 | null | [
"gguf",
"license:other",
"region:us"
]
| null | 2023-09-17T09:12:07Z | ---
license: other
---
|
TheBloke/Synthia-7B-v1.2-GGUF | TheBloke | 2023-09-27T12:54:36Z | 435 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"base_model:migtissera/Synthia-7B-v1.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-24T03:53:06Z | ---
language:
- en
license: llama2
library_name: transformers
model_name: Synthia 7B V1.2
base_model: migtissera/Synthia-7B-v1.2
inference: false
model_creator: Migel Tissera
model_type: llama
pipeline_tag: text-generation
prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful,
detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 7B V1.2 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 7B V1.2](https://huggingface.co/migtissera/Synthia-7B-v1.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Synthia 7B V1.2](https://huggingface.co/migtissera/Synthia-7B-v1.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-7B-v1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-7B-v1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia
```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [synthia-7b-v1.2.Q2_K.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [synthia-7b-v1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [synthia-7b-v1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [synthia-7b-v1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [synthia-7b-v1.2.Q4_0.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [synthia-7b-v1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [synthia-7b-v1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [synthia-7b-v1.2.Q5_0.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [synthia-7b-v1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [synthia-7b-v1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [synthia-7b-v1.2.Q6_K.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [synthia-7b-v1.2.Q8_0.gguf](https://huggingface.co/TheBloke/Synthia-7B-v1.2-GGUF/blob/main/synthia-7b-v1.2.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Synthia-7B-v1.2-GGUF and below it, a specific filename to download, such as: synthia-7b-v1.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Synthia-7B-v1.2-GGUF synthia-7b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Synthia-7B-v1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-7B-v1.2-GGUF synthia-7b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m synthia-7b-v1.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Synthia-7B-v1.2-GGUF", model_file="synthia-7b-v1.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
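To apply the Synthia prompt template shown earlier with the same ctransformers setup, a minimal sketch (the question is just an example) looks like:
```python
# Reuses the `llm` object from the example above; stop on "USER:" so the model
# doesn't continue the conversation on your behalf.
system = "You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."
prompt = f"SYSTEM: {system}\nUSER: What is a GGUF file?\nASSISTANT:"
print(llm(prompt, max_new_tokens=256, stop=["USER:"]))
```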
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Synthia 7B V1.2
Change from Synthia-7B -> Synthia-7B-v1.2: Capable of generalized Tree of Thought + Chain of Thought reasoning.
All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# Synthia-7B-v1.2
SynthIA (Synthetic Intelligent Agent) is a LLama-2-7B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-7B-v1.2 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|54.35|
|*hellaswag*|acc_norm|79.29|
|*mmlu*|acc_norm|49.33|
|*truthfulqa_mc*|mc2|48.92|
|**Total Average**|-|**57.97**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.2"
output_file_path = "./Synthia-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-7B-v1.2,
author = {Migel Tissera},
title = {Synthia-70B-v1.2b: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA2: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
<!-- original-model-card end -->
|
TheBloke/tora-13B-v1.0-GGUF | TheBloke | 2023-10-14T19:36:30Z | 435 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"code",
"math",
"text-generation",
"en",
"dataset:gsm8k",
"dataset:competition_math",
"arxiv:2309.17452",
"base_model:llm-agents/tora-13b-v1.0",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-14T19:30:20Z | ---
base_model: llm-agents/tora-13b-v1.0
datasets:
- gsm8k
- competition_math
inference: false
language:
- en
library_name: transformers
license: llama2
metrics:
- exact_match
model_creator: LLM-Agents
model_name: ToRA 13B v1.0
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- code
- math
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ToRA 13B v1.0 - GGUF
- Model creator: [LLM-Agents](https://huggingface.co/llm-agents)
- Original model: [ToRA 13B v1.0](https://huggingface.co/llm-agents/tora-13b-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LLM-Agents's ToRA 13B v1.0](https://huggingface.co/llm-agents/tora-13b-v1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-13B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-13B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF)
* [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-13b-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tora-13b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [tora-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [tora-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [tora-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [tora-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tora-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [tora-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [tora-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tora-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [tora-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [tora-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [tora-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/tora-13B-v1.0-GGUF/blob/main/tora-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/tora-13B-v1.0-GGUF and below it, a specific filename to download, such as: tora-13b-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/tora-13B-v1.0-GGUF tora-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/tora-13B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-13B-v1.0-GGUF tora-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tora-13b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-13B-v1.0-GGUF", model_file="tora-13b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
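To apply the ToRA prompt template shown earlier with the same ctransformers setup, a minimal sketch (the question is just an example) looks like:
```python
# Reuses the `llm` object from the example above; stop on "<|user|>" so the
# model does not start a new turn on its own.
prompt = "<|user|>\nSolve 3x + 5 = 20 for x.\n<|assistant|>\n"
print(llm(prompt, max_new_tokens=512, stop=["<|user|>"]))
```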
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LLM-Agents's ToRA 13B v1.0
<h1 align="center">
ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving
</h1>
<p align="center">
<a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> •
<a href="https://arxiv.org/abs/2309.17452"><b>[📜 Paper]</b></a> •
<a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> •
<a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a>
<br>
<a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> •
<a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> •
<a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a>
<!-- <a href="#-quick-start">Quick Start</a> • -->
<!-- <a href="#%EF%B8%8F-citation">Citation</a> -->
</p>
<p align="center">
Repo for "<a href="https://arxiv.org/abs/2309.17452" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>"
</p>
## 🔥 News
- [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!!
- [2023/09/29] ToRA paper, repo, and website released.
## 💡 Introduction
ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. The ToRA series seamlessly integrates natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of external tools.
| Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>†</sup> |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4|
| [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5|
| [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9|
| [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 |
| [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 |
| [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** |
- <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come!
- <sup>†</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.
## ⚡️ Training
The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4.
We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details.
## 🪁 Inference & Evaluation
Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code.
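For a quick local smoke test, the checkpoint can also be loaded with the standard 🤗 Transformers API. This is only a minimal sketch: the prompt and generation settings below are illustrative assumptions, not the official ToRA pipeline, which interleaves natural-language reasoning with actual tool execution (see the GitHub repo above for that).

```python
# Minimal sketch (assumptions: standard causal-LM loading; the prompt format is illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-agents/tora-13b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires `accelerate`

prompt = "Question: What is 17 * 23?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```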
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{gou2023tora,
title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
year={2023},
eprint={2309.17452},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
echo840/Monkey-Chat | echo840 | 2024-04-06T11:27:42Z | 435 | 8 | transformers | [
"transformers",
"pytorch",
"monkey",
"text-generation",
"custom_code",
"arxiv:2311.06607",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-01-08T07:07:38Z | # Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models
<div align="center">
Zhang Li*, Biao Yang*, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu†, Xiang Bai†
</div>
<div align="center">
<strong>Huazhong University of Science and Technology, Kingsoft</strong>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2311.06607">Paper</a>   |   <a href="http://huggingface.co/datasets/echo840/Detailed_Caption">Detailed Caption</a>   |   <a href="http://huggingface.co/echo840/Monkey">Model Weight</a>   | <a href="https://www.wisemodel.cn/models/HUST-VLRLab/Monkey/">Model Weight in wisemodel</a>  
<!-- |   <a href="Monkey Model">Monkey Models</a>  |   <a href="http://huggingface.co/echo840/Monkey">Tutorial</a> -->
</p>
-----
**Monkey** brings a training-efficient approach to effectively improve the input resolution capacity up to 896 x 1344 pixels without pretraining from the start. To bridge the gap between simple text labels and high input resolution, we propose a multi-level description generation method, which automatically provides rich information that can guide the model to learn the contextual association between scenes and objects. With the synergy of these two designs, our model achieved excellent results on multiple benchmarks. By comparing our model with various LMMs, including GPT4V, our model demonstrates promising performance in image captioning by paying attention to textual information and capturing fine details within the images; its improved input resolution also enables remarkable performance in document images with dense text.
## Spotlights
- **Contextual associations.** Our method demonstrates a superior ability to infer the relationships between targets more effectively when answering questions, which results in delivering more comprehensive and insightful results.
- **Support resolution up to 1344 x 896.** Surpassing the standard 448 x 448 resolution typically employed for LMMs, this significant increase in resolution augments the ability to discern and understand unnoticeable or tightly clustered objects and dense text.
- **Enhanced general performance.** We carried out testing across 16 diverse datasets, leading to impressive performance by our Monkey model in tasks such as Image Captioning, General Visual Question Answering, Text-centric Visual Question Answering, and Document-oriented Visual Question Answering.
## Environment
```python
conda create -n monkey python=3.9
conda activate monkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey
pip install -r requirements.txt
```
## Demo
Before 14/11/2023, we observed that for some random pictures Monkey can achieve more accurate results than GPT4V.
We also provide the source code and the model weight for the original demo, allowing you to customize certain parameters for a more unique experience. The specific operations are as follows:
1. Make sure you have configured the [environment](#environment).
2. You can choose to use the demo offline or online:
- **Offline:**
- Download the [Model Weight](http://huggingface.co/echo840/Monkey).
- Modify `DEFAULT_CKPT_PATH="pathto/Monkey"` in the `demo.py` file to your model weight path.
- Run the demo using the following command:
```
python demo.py
```
- **Online:**
- Run the demo and download model weights online with the following command:
```
python demo.py -c echo840/Monkey
```
## Dataset
We have open-sourced the data generated by the multi-level description generation method. You can download it at [Detailed Caption](https://huggingface.co/datasets/echo840/Detailed_Caption).
## Evaluate
We offer evaluation code for 14 Visual Question Answering (VQA) datasets in the `evaluate_vqa.py` file, facilitating a quick verification of results. The specific operations are as follows:
1. Make sure you have configured the [environment](#environment).
2. Modify `sys.path.append("pathto/Monkey")` to your model weight path.
3. Prepare the datasets required for evaluation.
4. Run the evaluation code.
Take ESTVQA as an example:
- Prepare data according to the following directory structure:
```
├── data
| ├── estvqa
| ├── test_image
| ├── {image_path0}
| ├── {image_path1}
| ·
| ·
| ├── estvqa.jsonl
```
- Example of the format of each line of the annotated `.jsonl` file:
```
{"image": "data/estvqa/test_image/011364.jpg", "question": "What is this store?", "answer": "pizzeria", "question_id": 0}
```
- Modify the dictionary `ds_collections`:
```
ds_collections = {
'estvqa_test': {
'test': 'data/estvqa/estvqa.jsonl',
'metric': 'anls',
'max_new_tokens': 100,
},
...
}
```
- Run the following command:
```
bash eval/eval.sh 'EVAL_PTH' 'SAVE_NAME'
```
## Train
We also offer Monkey's model definition and training code, which you can explore above. You can run the training code by executing `finetune_ds_debug.sh`.
**ATTENTION:** Specify the path to your training data, which should be a json file consisting of a list of conversations.
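For orientation only, such a file might look roughly like the record below. The field names follow the Qwen-VL convention that Monkey builds on and are an assumption here; consult the Monkey repository for the exact schema expected by `finetune_ds_debug.sh`. The image path and question/answer reuse the ESTVQA example from the Evaluate section.

```
[
  {
    "id": "sample_0",
    "conversations": [
      {"from": "user", "value": "<img>data/estvqa/test_image/011364.jpg</img> What is this store?"},
      {"from": "assistant", "value": "pizzeria"}
    ]
  }
]
```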
## Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "echo840/Monkey-Chat"
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='cuda', trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
tokenizer.padding_side = 'left'
tokenizer.pad_token_id = tokenizer.eod_id
img_path = ""
question = ""
# Monkey-Chat has the same prompt format for both VQA and detailed caption.
query = f'<img>{img_path}</img> {question} Answer: '
input_ids = tokenizer(query, return_tensors='pt', padding='longest')
attention_mask = input_ids.attention_mask
input_ids = input_ids.input_ids
pred = model.generate(
input_ids=input_ids.cuda(),
attention_mask=attention_mask.cuda(),
do_sample=False,
num_beams=1,
max_new_tokens=512,
min_new_tokens=1,
length_penalty=1,
num_return_sequences=1,
output_hidden_states=True,
use_cache=True,
pad_token_id=tokenizer.eod_id,
eos_token_id=tokenizer.eod_id,
)
response = tokenizer.decode(pred[0][input_ids.size(1):].cpu(), skip_special_tokens=True).strip()
print(response)
```
## Citing Monkey
If you wish to refer to the baseline results published here, please use the following BibTeX entries:
```BibTeX
@article{li2023monkey,
title={Monkey: Image Resolution and Text Label Are Important Things for Large Multi-modal Models},
author={Li, Zhang and Yang, Biao and Liu, Qiang and Ma, Zhiyin and Zhang, Shuo and Yang, Jingxu and Sun, Yabo and Liu, Yuliang and Bai, Xiang},
journal={arXiv preprint arXiv:2311.06607},
year={2023}
}
```
If you find Monkey cute, please give the repository a star. It would be a great encouragement for us.
## Acknowledgement
[Qwen-VL](https://github.com/QwenLM/Qwen-VL.git): the codebase we built upon. Thanks for the authors of Qwen for providing the framework.
## Copyright
We welcome suggestions to help us improve the Monkey. For any query, please contact Dr. Yuliang Liu: [email protected]. If you find something interesting, please also feel free to share with us through email or open an issue. Thanks!
|
MaziyarPanahi/gemma-2b-GGUF | MaziyarPanahi | 2024-02-27T15:40:55Z | 435 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"base_model:google/gemma-2b"
]
| text-generation | 2024-02-21T14:02:17Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- gguf
- gemma
- text-generation
- arxiv:2312.11805
- arxiv:2009.03300
- arxiv:1905.07830
- arxiv:1911.11641
- arxiv:1904.09728
- arxiv:1905.10044
- arxiv:1907.10641
- arxiv:1811.00937
- arxiv:1809.02789
- arxiv:1911.01547
- arxiv:1705.03551
- arxiv:2107.03374
- arxiv:2108.07732
- arxiv:2110.14168
- arxiv:2304.06364
- arxiv:2206.04615
- arxiv:1804.06876
- arxiv:2110.08193
- arxiv:2009.11462
- arxiv:2101.11718
- arxiv:1804.09301
- arxiv:2109.07958
- arxiv:2203.09509
- license:other
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
- text-generation
model_name: gemma-2b-GGUF
base_model: google/gemma-2b
inference: false
model_creator: google
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/gemma-2b-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2b-GGUF)
- Model creator: [google](https://huggingface.co/google)
- Original model: [google/gemma-2b](https://huggingface.co/google/gemma-2b)
## Description
[MaziyarPanahi/gemma-2b-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2b-GGUF) contains GGUF format model files for [google/gemma-2b](https://huggingface.co/google/gemma-2b).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
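As a sanity check, most of the bits-per-weight figures above can be reproduced from the block layouts. The sketch below assumes one fp16 super-block scale (plus an fp16 super-block min for the "type-1" variants); Q2_K is omitted because its quoted figure involves additional packing details not captured by this simple accounting.

```python
def bpw(weights_per_superblock, weight_bits, n_blocks, scale_bits, type1=False):
    bits = weights_per_superblock * weight_bits          # quantized weight values
    bits += n_blocks * scale_bits * (2 if type1 else 1)  # per-block scales (+ mins for "type-1")
    bits += 16 * (2 if type1 else 1)                     # fp16 super-block scale (+ min for "type-1")
    return bits / weights_per_superblock

print(bpw(256, 3, 16, 6))             # GGML_TYPE_Q3_K -> 3.4375
print(bpw(256, 4, 8, 6, type1=True))  # GGML_TYPE_Q4_K -> 4.5
print(bpw(256, 5, 8, 6, type1=True))  # GGML_TYPE_Q5_K -> 5.5
print(bpw(256, 6, 16, 8))             # GGML_TYPE_Q6_K -> 6.5625
```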
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/gemma-2b-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2b-GGUF) and below it, a specific filename to download, such as: gemma-2b-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/gemma-2b-GGUF gemma-2b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/gemma-2b-GGUF](https://huggingface.co/MaziyarPanahi/gemma-2b-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/gemma-2b-GGUF gemma-2b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m gemma-2b-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./gemma-2b-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./gemma-2b-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
llmixer/BigWeave-v29-122b | llmixer | 2024-03-08T07:55:32Z | 435 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"frankenmerge",
"122b",
"en",
"base_model:152334H/miqu-1-70b-sf",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-07T23:31:58Z | ---
base_model:
- 152334H/miqu-1-70b-sf
license: unknown
language:
- en
pipeline_tag: text-generation
tags:
- merge
- frankenmerge
- 122b
---
# BigWeave v29 122b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
# Prompting Format
ChatML, Mistral, or Vicuna.
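For reference, a ChatML-style prompt (the first of the formats listed above) looks like the block below; the Mistral and Vicuna templates wrap the same roles in different tokens.

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```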
# Merge process
This is a self-merge of 152334H/miqu-1-70b-sf. Layers are repeated in groups of 4 with a 2-layer overlap. The first and last 8/9 layers are not repeated.
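The slice pattern (an intact leading block, 4-layer windows advancing 2 layers at a time, then an intact trailing block) can be generated programmatically. The small sketch below merely reproduces the layer ranges of the configuration that follows; it is not part of the merge tooling itself.

```python
# Reproduce the BigWeave v29 slice boundaries over the 80-layer base model.
slices = [(0, 11)]                                   # leading layers, used once
slices += [(s, s + 4) for s in range(9, 69, 2)]      # 4-layer windows with a 2-layer overlap
slices.append((69, 80))                              # trailing layers, used once

for start, end in slices:
    print(f"layer_range: [{start},{end}]")

print(sum(end - start for start, end in slices))     # 142 layers in the merged model
```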
Merge configuration:
```
slices:
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [0,11]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [9,13]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [11,15]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [13,17]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [15,19]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [17,21]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [19,23]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [21,25]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [23,27]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [25,29]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [27,31]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [29,33]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [31,35]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [33,37]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [35,39]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [37,41]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [39,43]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [41,45]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [43,47]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [45,49]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [47,51]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [49,53]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [51,55]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [53,57]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [55,59]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [57,61]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [59,63]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [61,65]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [63,67]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [65,69]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [67,71]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [69,80]
merge_method: passthrough
dtype: float16
``` |
mradermacher/MaidFlameSoup-7B-GGUF | mradermacher | 2024-05-06T05:15:34Z | 435 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/MaidFlameSoup-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-05T08:10:09Z | ---
base_model: nbeerbower/MaidFlameSoup-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/MaidFlameSoup-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
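As a concrete starting point, one of the single-file quants listed below can be fetched and run with llama.cpp roughly like this (a sketch; pick whichever quant fits your hardware and adjust context size and GPU offload accordingly):

```shell
pip3 install huggingface-hub
huggingface-cli download mradermacher/MaidFlameSoup-7B-GGUF MaidFlameSoup-7B.Q4_K_M.gguf --local-dir .
./main -m MaidFlameSoup-7B.Q4_K_M.gguf -ngl 35 -c 4096 -n 256 -p "Write a short story about a maid and a bowl of soup."
```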
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MaidFlameSoup-7B-GGUF/resolve/main/MaidFlameSoup-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Norod78/sdxl-emoji-lora | Norod78 | 2024-04-11T08:59:21Z | 435 | 7 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"emoji",
"style",
"sdxl style lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2024-04-11T07:44:31Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- emoji
- style
- sdxl style lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: flat, emoji
widget:
- text: 'The girl with a pearl earring flat '
output:
url: >-
2467685.jpeg
- text: 'The girl with a pearl earring flat '
output:
url: >-
2467686.jpeg
- text: 'American gothic flat '
output:
url: >-
2467697.jpeg
- text: 'American gothic flat '
output:
url: >-
2467698.jpeg
- text: 'Gal Gadot as wonderwoman flat '
output:
url: >-
2487283.jpeg
- text: 'Gal Gadot as wonderwoman flat '
output:
url: >-
2487285.jpeg
---
# SDXL Emoji LoRA
<Gallery />
([CivitAI](https://civitai.com/models/144245))
## Model description
A small (<30 MB) rank 4 LoRA for SDXL which can help you style your images as emojis. It can make "Regular 3D emojis" or "Flat emojis".

**Please see the LoRA weight scale <-> trigger word chart below before using this model.** This model is very sensitive to having the correct LoRA weight scaling per trigger word (not to be confused with guidance scale).

I intended for the trigger words "Emoji" and "Flat" to be mutually exclusive: when "Emoji" is part of your prompt, do not have "Flat" in your prompt, and vice versa. When you use "Flat", give a LoRA scale of 0.7-0.8. Alternatively, if you choose "Emoji" for your prompt, give a lower LoRA scale of 0.2-0.5.

In the Showcase you can see various prompt examples by pressing the little (i) icon. For example, "Gal Gadot as wonderwoman flat `<lora:SDXL-Emoji-Lora-r4:0.7>`" has "Gal Gadot as wonderwoman" as the prompt, "flat" as the trigger word, and 0.7 as the LoRA scale.

In the image below, the same seed and the same basic prompt were used; what changed is the trigger word and the LoRA weight scale.



**Note that in order to get a proper result, you need a relatively low LoRA weight scale for the "Emoji" style (in the 0.2-0.5 range), while for the "Flat" style you want a higher weight scale (in the 0.7-0.8 range).**
## Trigger words
You should use `emoji`, `flat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/sdxl-emoji-lora/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/sdxl-emoji-lora', weight_name='SDXL-Emoji-Lora-r4.safetensors')
image = pipeline('Gal Gadot as wonderwoman flat ').images[0]
```
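Because this LoRA is sensitive to the weight scale (0.2-0.5 for `emoji`, 0.7-0.8 for `flat`), you will usually want to set that scale explicitly. One way to do this in recent diffusers releases is via `cross_attention_kwargs` (a sketch; the exact mechanism can vary between diffusers versions):

```py
# "flat" trigger word -> higher LoRA weight scale
image_flat = pipeline(
    'Gal Gadot as wonderwoman flat ',
    cross_attention_kwargs={"scale": 0.7},
).images[0]

# "emoji" trigger word -> lower LoRA weight scale
image_emoji = pipeline(
    'Gal Gadot as wonderwoman emoji ',
    cross_attention_kwargs={"scale": 0.3},
).images[0]
```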
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
prs-eth/marigold-normals-v0-1 | prs-eth | 2024-05-09T13:57:06Z | 435 | 1 | diffusers | [
"diffusers",
"safetensors",
"monocular normals estimation",
"single image normals estimation",
"normals",
"in-the-wild",
"zero-shot",
"normals-estimation",
"en",
"arxiv:2312.02145",
"license:apache-2.0",
"diffusers:MarigoldPipeline",
"region:us"
]
| null | 2024-04-18T15:32:37Z | ---
license: apache-2.0
language:
- en
pipeline_tag: normals-estimation
tags:
- monocular normals estimation
- single image normals estimation
- normals
- in-the-wild
- zero-shot
---
# Marigold Normals Model Card
This model belongs to the family of diffusion-based Marigold models for solving various computer vision tasks.
The Marigold Normals model focuses on the surface normals task.
It takes an input image and computes surface normals in each pixel.
The Marigold Normals model is trained from Stable Diffusion with synthetic data.
Thanks to the rich visual knowledge stored in Stable Diffusion, Marigold models possess deep scene understanding and excel at solving computer vision tasks.
Read more about Marigold in our paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".
[](https://marigoldmonodepth.github.io)
[](https://github.com/prs-eth/Marigold)
[](https://arxiv.org/abs/2312.02145)
[](https://huggingface.co/spaces/toshas/marigold)
Developed by:
[Bingxin Ke](http://www.kebingxin.com/),
[Anton Obukhov](https://www.obukhov.ai/),
[Shengyu Huang](https://shengyuh.github.io/),
[Nando Metzger](https://nandometzger.github.io/),
[Rodrigo Caye Daudt](https://rcdaudt.github.io/),
[Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ&hl=en)

## 🎓 Citation
```bibtex
@InProceedings{ke2023repurposing,
title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
```
## 🎫 License
This work is licensed under the Apache License, Version 2.0 (as defined in the [LICENSE](LICENSE.txt)).
By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE.txt).
[](https://www.apache.org/licenses/LICENSE-2.0)
|
bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF | bartowski | 2024-04-25T20:17:21Z | 435 | 1 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-25T19:57:31Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Meta-Llama-3-8B-Instruct-64k
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization.
Original model: https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-64k
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3-8B-Instruct-64k-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Meta-Llama-3-8B-Instruct-64k-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-64k-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-8B-Instruct-64k-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Meta-Llama-3-8B-Instruct-64k-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Meta-Llama-3-8B-Instruct-64k-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-8B-Instruct-64k-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Meta-Llama-3-8B-Instruct-64k-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-8B-Instruct-64k-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Meta-Llama-3-8B-Instruct-64k-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-64k-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-64k-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-64k-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-64k-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-64k-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Meta-Llama-3-8B-Instruct-64k-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/Meta-Llama-3-8B-Instruct-64k-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
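If you prefer the command line, any single file from the table above can be fetched with `huggingface-cli` (swap in whichever quant fits your hardware):

```shell
pip3 install huggingface-hub
huggingface-cli download bartowski/Meta-Llama-3-8B-Instruct-64k-GGUF Meta-Llama-3-8B-Instruct-64k-Q4_K_M.gguf --local-dir .
```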
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Maverick-v2.0-GGUF | mradermacher | 2024-05-05T14:57:31Z | 435 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:shyamieee/Maverick-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-30T13:33:24Z | ---
base_model: shyamieee/Maverick-v2.0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/shyamieee/Maverick-v2.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-v2.0-GGUF/resolve/main/Maverick-v2.0.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bartowski/codegemma-1.1-7b-it-GGUF | bartowski | 2024-05-10T16:47:11Z | 435 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-03T23:58:28Z | ---
library_name: transformers
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
- text: >
<start_of_turn>user
Write a Python function to calculate the nth fibonacci number.<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of codegemma-1.1-7b-it
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2828">b2828</a> for quantization.
Original model: https://huggingface.co/google/codegemma-1.1-7b-it
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model
```
Note that this model does not support a System prompt.
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [codegemma-1.1-7b-it-Q8_0.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q8_0.gguf) | Q8_0 | 9.07GB | Extremely high quality, generally unneeded but max available quant. |
| [codegemma-1.1-7b-it-Q6_K.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q6_K.gguf) | Q6_K | 7.01GB | Very high quality, near perfect, *recommended*. |
| [codegemma-1.1-7b-it-Q5_K_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q5_K_M.gguf) | Q5_K_M | 6.14GB | High quality, *recommended*. |
| [codegemma-1.1-7b-it-Q5_K_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q5_K_S.gguf) | Q5_K_S | 5.98GB | High quality, *recommended*. |
| [codegemma-1.1-7b-it-Q4_K_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [codegemma-1.1-7b-it-Q4_K_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q4_K_S.gguf) | Q4_K_S | 5.04GB | Slightly lower quality with more space savings, *recommended*. |
| [codegemma-1.1-7b-it-IQ4_NL.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ4_NL.gguf) | IQ4_NL | 5.01GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [codegemma-1.1-7b-it-IQ4_XS.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ4_XS.gguf) | IQ4_XS | 4.76GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [codegemma-1.1-7b-it-Q3_K_L.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q3_K_L.gguf) | Q3_K_L | 4.70GB | Lower quality but usable, good for low RAM availability. |
| [codegemma-1.1-7b-it-Q3_K_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q3_K_M.gguf) | Q3_K_M | 4.36GB | Even lower quality. |
| [codegemma-1.1-7b-it-IQ3_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ3_M.gguf) | IQ3_M | 4.10GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [codegemma-1.1-7b-it-IQ3_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ3_S.gguf) | IQ3_S | 3.98GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [codegemma-1.1-7b-it-Q3_K_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q3_K_S.gguf) | Q3_K_S | 3.98GB | Low quality, not recommended. |
| [codegemma-1.1-7b-it-IQ3_XS.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ3_XS.gguf) | IQ3_XS | 3.80GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [codegemma-1.1-7b-it-IQ3_XXS.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ3_XXS.gguf) | IQ3_XXS | 3.48GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [codegemma-1.1-7b-it-Q2_K.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-Q2_K.gguf) | Q2_K | 3.48GB | Very low quality but surprisingly usable. |
| [codegemma-1.1-7b-it-IQ2_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ2_M.gguf) | IQ2_M | 3.13GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [codegemma-1.1-7b-it-IQ2_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ2_S.gguf) | IQ2_S | 2.91GB | Very low quality, uses SOTA techniques to be usable. |
| [codegemma-1.1-7b-it-IQ2_XS.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ2_XS.gguf) | IQ2_XS | 2.81GB | Very low quality, uses SOTA techniques to be usable. |
| [codegemma-1.1-7b-it-IQ2_XXS.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ2_XXS.gguf) | IQ2_XXS | 2.58GB | Lower quality, uses SOTA techniques to be usable. |
| [codegemma-1.1-7b-it-IQ1_M.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ1_M.gguf) | IQ1_M | 2.32GB | Extremely low quality, *not* recommended. |
| [codegemma-1.1-7b-it-IQ1_S.gguf](https://huggingface.co/bartowski/codegemma-1.1-7b-it-GGUF/blob/main/codegemma-1.1-7b-it-IQ1_S.gguf) | IQ1_S | 2.16GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/codegemma-1.1-7b-it-GGUF --include "codegemma-1.1-7b-it-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/codegemma-1.1-7b-it-GGUF --include "codegemma-1.1-7b-it-Q8_0.gguf/*" --local-dir codegemma-1.1-7b-it-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (codegemma-1.1-7b-it-Q8_0) or download them all in place (./)
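Equivalently, the same download can be done from Python with `huggingface_hub` (which the CLI above wraps); a minimal sketch, with the target filename chosen from the table above:
```python
from huggingface_hub import hf_hub_download

# Download a single quant file from the repo into the current directory.
path = hf_hub_download(
    repo_id="bartowski/codegemma-1.1-7b-it-GGUF",
    filename="codegemma-1.1-7b-it-Q4_K_M.gguf",
    local_dir="./",
)
print(path)
```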
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
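To make that sizing rule concrete, here is a minimal sketch of picking a quant by available VRAM; the sizes are copied from the table above and the 1.5 GB headroom is just an illustrative default:
```python
# File sizes (GB) taken from the quant table above.
QUANT_SIZES_GB = {
    "Q8_0": 9.07, "Q6_K": 7.01, "Q5_K_M": 6.14, "Q4_K_M": 5.32,
    "IQ4_XS": 4.76, "Q3_K_M": 4.36, "IQ3_M": 4.10, "Q2_K": 3.48,
}

def pick_quant(vram_gb, headroom_gb=1.5):
    """Return the largest quant that leaves some VRAM headroom, or None."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    # The largest file that still fits generally gives the best quality.
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # -> 'Q5_K_M' on an 8 GB card
print(pick_quant(6.0))   # -> 'Q3_K_M' on a 6 GB card
```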
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a trade-off you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF | mradermacher | 2024-05-05T14:50:49Z | 435 | 1 | transformers | [
"transformers",
"gguf",
"meta",
"llama-3",
"en",
"base_model:abacusai/Llama-3-Giraffe-70B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-04T06:12:11Z | ---
base_model: abacusai/Llama-3-Giraffe-70B-Instruct
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- meta
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/abacusai/Llama-3-Giraffe-70B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Giraffe-70B-Instruct-i1-GGUF/resolve/main/Llama-3-Giraffe-70B-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
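The split i1-Q6_K above just needs its parts joined byte-for-byte into one file before loading; a minimal sketch (filenames assumed to match the part links above):
```python
import shutil

parts = [
    "Llama-3-Giraffe-70B-Instruct.i1-Q6_K.gguf.part1of2",
    "Llama-3-Giraffe-70B-Instruct.i1-Q6_K.gguf.part2of2",
]

# Concatenate the downloaded parts, in order, into a single GGUF file.
with open("Llama-3-Giraffe-70B-Instruct.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```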
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Rachata/Rai1.8b_th_pt | Rachata | 2024-05-21T06:12:24Z | 435 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-12T15:17:26Z | Entry not found |
mradermacher/mistral-orthogonalized-GGUF | mradermacher | 2024-05-14T19:09:56Z | 435 | 1 | transformers | [
"transformers",
"gguf",
"mistral ",
"en",
"base_model:cosmicvalor/mistral-orthogonalized",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-12T18:45:42Z | ---
base_model: cosmicvalor/mistral-orthogonalized
language:
- en
library_name: transformers
license: apache-2.0
no_imatrix: nan
quantized_by: mradermacher
tags:
- 'mistral '
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cosmicvalor/mistral-orthogonalized
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-orthogonalized-GGUF/resolve/main/mistral-orthogonalized.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf | RichardErkhov | 2024-05-21T19:57:48Z | 435 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-21T17:03:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
jaskier-7b-dpo-v5.6 - GGUF
- Model creator: https://huggingface.co/bardsai/
- Original model: https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [jaskier-7b-dpo-v5.6.Q2_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q2_K.gguf) | Q2_K | 2.53GB |
| [jaskier-7b-dpo-v5.6.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [jaskier-7b-dpo-v5.6.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [jaskier-7b-dpo-v5.6.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [jaskier-7b-dpo-v5.6.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [jaskier-7b-dpo-v5.6.Q3_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q3_K.gguf) | Q3_K | 3.28GB |
| [jaskier-7b-dpo-v5.6.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [jaskier-7b-dpo-v5.6.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [jaskier-7b-dpo-v5.6.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [jaskier-7b-dpo-v5.6.Q4_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q4_0.gguf) | Q4_0 | 3.83GB |
| [jaskier-7b-dpo-v5.6.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [jaskier-7b-dpo-v5.6.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [jaskier-7b-dpo-v5.6.Q4_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q4_K.gguf) | Q4_K | 4.07GB |
| [jaskier-7b-dpo-v5.6.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [jaskier-7b-dpo-v5.6.Q4_1.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q4_1.gguf) | Q4_1 | 4.24GB |
| [jaskier-7b-dpo-v5.6.Q5_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q5_0.gguf) | Q5_0 | 4.65GB |
| [jaskier-7b-dpo-v5.6.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [jaskier-7b-dpo-v5.6.Q5_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q5_K.gguf) | Q5_K | 4.78GB |
| [jaskier-7b-dpo-v5.6.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [jaskier-7b-dpo-v5.6.Q5_1.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q5_1.gguf) | Q5_1 | 5.07GB |
| [jaskier-7b-dpo-v5.6.Q6_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q6_K.gguf) | Q6_K | 5.53GB |
| [jaskier-7b-dpo-v5.6.Q8_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v5.6-gguf/blob/main/jaskier-7b-dpo-v5.6.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
tags:
- llm
- 7b
license: cc-by-4.0
language:
- en
datasets:
- argilla/distilabel-math-preference-dpo
---
# Jaskier-7b-dpo-v5.6
**This is work-in-progress model, may not be ready for production use**
<figure>

</figure>
Model based on `paulml/OGNO-7B` (downstream version of Mistral7B) finetuned using Direct Preference Optimization on argilla/distilabel-math-preference-dpo.
## How to use
You can use this model directly with a Hugging Face pipeline:
```python
from transformers import pipeline, Conversation
import torch
base_model_name = "bardsai/jaskier-7b-dpo-v5.6"
chatbot = pipeline("conversational", model=base_model_name, torch_dtype=torch.float16, device_map="auto")
conversation = Conversation("Is bard an ML engineer?")
conversation = chatbot(conversation)
print(conversation.messages[-1]["content"])
```
## Output
"There is no direct personal connection between the concept of a "bard" and an "ML engineer." A bard is a mythical or literary figure, often a storyteller or musician, while an ML engineer refers to a Machine Learning engineer, a professional in the tech industry. They are unrelated entities, one fictional and the other a real-world occupation."
If you still find any issues with "INST" character chain appearing in generated output, try our newest model: https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1 . Re-tasking the prompt can also help.
## Changelog
- 2024-02-16: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: bards.ai
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
|
magiccpp/open_llama_3b_v2-Q8_0-GGUF | magiccpp | 2024-06-12T07:01:21Z | 435 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"base_model:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-12T07:01:05Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: openlm-research/open_llama_3b_v2
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# magiccpp/open_llama_3b_v2-Q8_0-GGUF
This model was converted to GGUF format from [`openlm-research/open_llama_3b_v2`](https://huggingface.co/openlm-research/open_llama_3b_v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openlm-research/open_llama_3b_v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo magiccpp/open_llama_3b_v2-Q8_0-GGUF --hf-file open_llama_3b_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo magiccpp/open_llama_3b_v2-Q8_0-GGUF --hf-file open_llama_3b_v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo magiccpp/open_llama_3b_v2-Q8_0-GGUF --hf-file open_llama_3b_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo magiccpp/open_llama_3b_v2-Q8_0-GGUF --hf-file open_llama_3b_v2-q8_0.gguf -c 2048
```
|
RichardErkhov/Qwen_-_Qwen2-72B-gguf | RichardErkhov | 2024-06-17T13:00:28Z | 435 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-17T01:07:19Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-72B - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-72B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-72B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.Q2_K.gguf) | Q2_K | 27.76GB |
| [Qwen2-72B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.IQ3_XS.gguf) | IQ3_XS | 30.59GB |
| [Qwen2-72B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.IQ3_S.gguf) | IQ3_S | 32.12GB |
| [Qwen2-72B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.Q3_K_S.gguf) | Q3_K_S | 32.12GB |
| [Qwen2-72B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.IQ3_M.gguf) | IQ3_M | 33.07GB |
| [Qwen2-72B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.Q3_K.gguf) | Q3_K | 35.11GB |
| [Qwen2-72B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.Q3_K_M.gguf) | Q3_K_M | 35.11GB |
| [Qwen2-72B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/blob/main/Qwen2-72B.Q3_K_L.gguf) | Q3_K_L | 36.79GB |
| [Qwen2-72B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | IQ4_XS | 37.4GB |
| [Qwen2-72B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q4_0 | 38.4GB |
| [Qwen2-72B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | IQ4_NL | 38.9GB |
| [Qwen2-72B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q4_K_S | 40.88GB |
| [Qwen2-72B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q4_K | 44.16GB |
| [Qwen2-72B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q4_K_M | 44.16GB |
| [Qwen2-72B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q4_1 | 42.56GB |
| [Qwen2-72B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q5_0 | 46.72GB |
| [Qwen2-72B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q5_K_S | 47.85GB |
| [Qwen2-72B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q5_K | 50.71GB |
| [Qwen2-72B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q5_K_M | 50.71GB |
| [Qwen2-72B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q5_1 | 50.88GB |
| [Qwen2-72B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q6_K | 59.93GB |
| [Qwen2-72B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-gguf/tree/main/) | Q8_0 | 71.96GB |
Original model description:
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen2-72B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 72B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code for Qwen2 has been in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
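A quick way to confirm the requirement before loading the original checkpoint; this is a minimal sketch only, since the full-precision 72B model needs multiple high-memory GPUs:
```python
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

# transformers >= 4.37.0 ships the qwen2 architecture; older versions raise KeyError: 'qwen2'.
print(transformers.__version__)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B", torch_dtype="auto", device_map="auto"
)
```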
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval (5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-72B performance
| Datasets | DeepSeek-V2 | Mixtral-8x22B | Llama-3-70B | Qwen1.5-72B | Qwen1.5-110B | **Qwen2-72B** |
| :--------| :---------: | :------------: | :------------: | :------------: | :------------: |:------------: |
|Architecture | MoE | MoE | Dense | Dense | Dense | Dense |
|#Activated Params | 21B | 39B | 70B | 72B | 110B | 72B |
|#Params | 236B | 140B | 70B | 72B | 110B | 72B|
| ***English*** | | | | | | |
|MMLU |78.5 | 77.8 | 79.5 | 77.5 | 80.4 | **84.2** |
|MMLU-Pro | - | 49.5 | 52.8 | 45.8 | 49.4 | **55.6** |
|GPQA | -| 34.3 | 36.3 | 36.3 | 35.9 | **37.9** |
|Theorem QA | -| 35.9 | 32.3 | 29.3 | 34.9 | **43.1** |
|BBH | 78.9 |78.9 | 81.0 | 65.5 | 74.8 | **82.4** |
|HellaSwag | 87.8 | **88.7** | 88.0 | 86.0 | 87.5 | 87.6 |
|Winogrande | 84.8|85.0 | **85.3** | 83.0 | 83.5 | 85.1 |
|ARC-C | 70.0| **70.7** | 68.8 | 65.9 | 69.6 | 68.9 |
|TruthfulQA | 42.2 | 51.0 | 45.6 | **59.6** | 49.6 | 54.8 |
| ***Coding*** | | | | | | |
|HumanEval | 45.7 | 46.3 | 48.2 | 46.3 | 54.3 | **64.6** |
|MBPP |73.9 | 71.7 | 70.4 | 66.9 | 70.9 | **76.9** |
|EvalPlus | 55.0 | 54.1 | 54.8 | 52.9 | 57.7 | **65.4** |
|MultiPL-E |44.4 | 46.7 | 46.3 | 41.8 | 52.7 | **59.6** |
| ***Mathematics*** | | | | | | |
|GSM8K | 79.2 | 83.7 | 83.0 | 79.5 | 85.4 | **89.5** |
|MATH | 43.6 | 41.7 | 42.5 | 34.1 | 49.6 | **51.1** |
| ***Chinese*** | | | | | | |
|C-Eval | 81.7 | 54.6 | 65.2 | 84.1 | 89.1 | **91.0** |
|CMMLU | 84.0 | 53.4 | 67.2 | 83.5 | 88.3 | **90.1** |
| ***Multilingual*** | | | | | | |
|Multi-Exam | 67.5 | 63.5 | 70.0 | 66.4 | 75.6 | **76.6** |
|Multi-Understanding | 77.0 | 77.7 | 79.9 | 78.2 | 78.2 | **80.7** |
|Multi-Mathematics | 58.8 | 62.9 | 67.1 | 61.7 | 64.4 | **76.0** |
|Multi-Translation | 36.0 | 23.3 | **38.0** | 35.6 | 36.2 | 37.8 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF | Klevin | 2024-06-20T08:00:40Z | 435 | 2 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Klevin/J.A.R.V.I.S-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T08:00:19Z | ---
base_model: Klevin/J.A.R.V.I.S-v2.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`Klevin/J.A.R.V.I.S-v2.0`](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_K_M-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_k_m.gguf -c 2048
```
|
jkodiyil/valcal_q4_k_m_guff | jkodiyil | 2024-06-27T11:10:07Z | 435 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-25T23:12:56Z | ---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jkodiyil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Helsinki-NLP/opus-mt-ga-en | Helsinki-NLP | 2023-08-16T11:37:45Z | 434 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ga",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
language:
- ga
- en
tags:
- translation
license: apache-2.0
---
### gle-eng
* source group: Irish
* target group: English
* OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md)
* model: transformer-align
* source language(s): gle
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt)
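The card does not include a usage snippet; a minimal sketch with the standard `transformers` Marian classes (the example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ga-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an Irish sentence into English.
batch = tokenizer(["Tá an aimsir go breá inniu."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```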
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gle.eng | 51.6 | 0.672 |
### System Info:
- hf_name: gle-eng
- source_languages: gle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ga', 'en']
- src_constituents: {'gle'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt
- src_alpha3: gle
- tgt_alpha3: eng
- short_pair: ga-en
- chrF2_score: 0.672
- bleu: 51.6
- brevity_penalty: 1.0
- ref_len: 11247.0
- src_name: Irish
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ga
- tgt_alpha2: en
- prefer_old: False
- long_pair: gle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
almanach/camembert-base-wikipedia-4gb | almanach | 2020-12-11T21:35:21Z | 434 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
#[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
# {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: Can be done in one step: tokenizer.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
# [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
# [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
# [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
# [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
sail/poolformer_s12 | sail | 2023-09-19T02:07:38Z | 434 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"poolformer",
"image-classification",
"vision",
"dataset:imagenet",
"arxiv:2111.11418",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (S12 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
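The pooling token mixer itself is tiny; the following PyTorch sketch mirrors the idea described above (stride-1 average pooling, with the input subtracted so the residual branch only mixes tokens) and is not the exact `timm` implementation:
```python
import torch
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """Replaces attention with local average pooling over spatial tokens."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2, count_include_pad=False)

    def forward(self, x):  # x: (batch, channels, height, width)
        # Subtracting x leaves a pure token-mixing term for the residual connection.
        return self.pool(x) - x

mixer = PoolingTokenMixer()
tokens = torch.randn(1, 64, 56, 56)
print(mixer(tokens).shape)  # torch.Size([1, 64, 56, 56])
```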
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s12')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s12')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| **PoolFormer-S12** | **77.2** | **12M** | **https://huggingface.co/sail/poolformer_s12** |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |
dallinmackay/Van-Gogh-diffusion | dallinmackay | 2023-05-16T09:25:20Z | 434 | 277 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-11-05T00:26:09Z | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/dallinmackay/Van-Gogh-diffusion/resolve/main/preview1.jpg"
tags:
- stable-diffusion
- text-to-image
---
### Van Gogh Diffusion
v2 - fixed and working
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the film **_Loving Vincent_**. Use the token **_lvngvncnt_** at the BEGINNING of your prompts to use the style (e.g., "lvngvncnt, beautiful woman at sunset"). This model works best with the Euler sampler (NOT Euler_a).
_Download the ckpt file from "files and versions" tab into the stable diffusion models folder of your web-ui of choice._
If you get too many yellow faces or you don't like the strong blue bias, simply put them in the negative prompt (e.g., "Yellow face, blue").
--
**Characters rendered with this model:**

_prompt and settings used: **lvngvncnt, [person], highly detailed** | **Steps: 25, Sampler: Euler, CFG scale: 6**_
--
**Landscapes/miscellaneous rendered with this model:**

_prompt and settings used: **lvngvncnt, [subject/setting], highly detailed** | **Steps: 25, Sampler: Euler, CFG scale: 6**_
--
This model was trained with Dreambooth, using TheLastBen's Colab notebook.
--
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dallinmackay/Van-Gogh-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "lvngvncnt, beautiful woman at sunset"
image = pipe(prompt).images[0]
image.save("./sunset.png")
```
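As noted above, the style responds best to the Euler sampler and benefits from a negative prompt; a sketch of how that might look in diffusers (the scheduler swap and negative prompt are illustrative, using the card's suggested settings):
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch

model_id = "dallinmackay/Van-Gogh-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler, not Euler_a
pipe = pipe.to("cuda")

image = pipe(
    "lvngvncnt, beautiful woman at sunset, highly detailed",
    negative_prompt="Yellow face, blue",  # counteract the yellow-face / blue bias mentioned above
    num_inference_steps=25,
    guidance_scale=6,
).images[0]
image.save("./sunset_euler.png")
```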
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
--
[](https://www.patreon.com/dallinmackay) |
timm/resnet152.a1_in1k | timm | 2024-02-10T23:40:06Z | 434 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-05T18:27:23Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet152.a1_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* ResNet Strikes Back `A1` recipe
* LAMB optimizer with BCE loss
* Cosine LR schedule with warmup
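For reference, the cosine schedule with warmup from the recipe can be sketched with plain PyTorch schedulers; the optimizer and numbers below are illustrative stand-ins (the A1 recipe itself uses LAMB), not the exact hyperparameters:
```python
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(10, 10)              # stand-in for the network
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3)

warmup_epochs, total_epochs = 5, 300          # illustrative values
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.01, total_iters=warmup_epochs),  # linear warmup
        CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs),   # cosine decay
    ],
    milestones=[warmup_epochs],
)

for epoch in range(total_epochs):
    # ... one training epoch here ...
    scheduler.step()
```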
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 60.2
- GMACs: 11.6
- Activations (M): 22.6
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet152.a1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet152.a1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet152.a1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
|
digiplay/majicMIX_realistic_v4 | digiplay | 2023-09-26T06:35:55Z | 434 | 6 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-05-29T20:14:50Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/43331/majicmix-realistic
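To generate images locally, the checkpoint can also be loaded with the diffusers library; a minimal sketch (the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/majicMIX_realistic_v4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "close-up portrait photo of a woman, soft window light, highly detailed"
image = pipe(prompt).images[0]
image.save("majicmix_sample.png")
```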
Sample image I generated with Hugging Face's inference API:
 |
FredZhang7/one-for-all-toxicity-v3 | FredZhang7 | 2023-12-12T02:56:20Z | 434 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"nlp",
"moderation",
"ar",
"es",
"pa",
"th",
"et",
"fr",
"fi",
"hu",
"lt",
"ur",
"so",
"pl",
"el",
"mr",
"sk",
"gu",
"he",
"af",
"te",
"ro",
"lv",
"sv",
"ne",
"kn",
"it",
"mk",
"cs",
"en",
"de",
"da",
"ta",
"bn",
"pt",
"sq",
"tl",
"uk",
"bg",
"ca",
"sw",
"hi",
"zh",
"ja",
"hr",
"ru",
"vi",
"id",
"sl",
"cy",
"ko",
"nl",
"ml",
"tr",
"fa",
"no",
"multilingual",
"dataset:FredZhang7/toxi-text-3M",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-29T06:03:27Z | ---
license: cc-by-4.0
datasets:
- FredZhang7/toxi-text-3M
pipeline_tag: text-classification
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
- 'no'
- multilingual
tags:
- nlp
- moderation
---
[Link to the distilbert spam defender](https://huggingface.co/FredZhang7/distilbert-spam-defender)
Find the v1 (TensorFlow) model in SavedModel format on [this page](https://github.com/FredZhang7/tfjs-node-tiny/releases/tag/text-classification).
The license for the v1 model is Apache 2.0.
<br>
| | v3 | v1 |
|----------|----------|----------|
| Base Model | bert-base-multilingual-cased | nlpaueb/legal-bert-small-uncased |
| Base Tokenizer | bert-base-multilingual-cased | bert-base-multilingual-cased |
| Framework | PyTorch | TensorFlow |
| Dataset Size | 3.0M | 2.68M |
| Train Split | 80% English<br>20% English + 100% Multilingual | None |
| English Train Accuracy | 99.5% | N/A (≈97.5%) |
| Other Train Accuracy | 98.6% | 96.6% |
| Final Val Accuracy | 96.8% | 94.6% |
| Languages | 55 | N/A (≈35) |
| Hyperparameters | maxlen=208<br>padding='max_length'<br>batch_size=112<br>optimizer=AdamW<br>learning_rate=1e-5<br>loss=BCEWithLogitsLoss() | maxlen=192<br>padding='max_length'<br>batch_size=16<br>optimizer=Adam<br>learning_rate=1e-5<br>loss="binary_crossentropy" |
| Training Stopped | 7/20/2023 | 9/05/2022 |
<br>
I manually annotated more data on top of Toxi Text 3M and added them to the training set.
Training on Toxi Text 3M alone results in a biased model that classifies short text with lower precision.
<br>
Models tested for v2: roberta, xlm-roberta, bert-small, bert-base-cased/uncased, bert-multilingual-cased/uncased, and albert-large-v2.
Of these, I chose bert-multilingual-cased because it performs better with the same amount of resources as the others for this particular task.
<br>
## PyTorch
```python
text = "hello world!"
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("FredZhang7/one-for-all-toxicity-v3")
model = AutoModelForSequenceClassification.from_pretrained("FredZhang7/one-for-all-toxicity-v3").to(device)
encoding = tokenizer.encode_plus(
text,
add_special_tokens=True,
max_length=208,
padding="max_length",
truncation=True,
return_tensors="pt"
)
print('device:', device)
input_ids = encoding["input_ids"].to(device)
attention_mask = encoding["attention_mask"].to(device)
with torch.no_grad():
outputs = model(input_ids, attention_mask=attention_mask)
logits = outputs.logits
predicted_labels = torch.argmax(logits, dim=1)
print(predicted_labels)
```
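If you want a probability rather than just the argmax class, you can softmax the logits from the snippet above; a short follow-up sketch (mapping index 1 to the toxic class is my assumption and is not stated in this card):
```python
import torch.nn.functional as F

probs = F.softmax(logits, dim=1)   # shape: (batch_size, num_labels)
toxic_score = probs[0, 1].item()   # assumption: index 1 = toxic, index 0 = not toxic
print(f"toxicity probability (assumed label order): {toxic_score:.4f}")
```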
## Attribution
- If you distribute, remix, adapt, or build upon One-for-all Toxicity v3, please credit "AIstrova Technologies Inc." in your README.md, application description, research, or website. |
BleachNick/MMICL-Instructblip-T5-xxl | BleachNick | 2023-09-20T08:25:28Z | 434 | 11 | transformers | [
"transformers",
"pytorch",
"instructblip",
"text2text-generation",
"en",
"arxiv:2309.07915",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-31T07:00:56Z | ---
license: mit
language:
- en
library_name: transformers
---
# Model Card for MMICL
# News 🚀
1. [09-19] We have converted the MMICL demo to a permanent link: [Demo for MMICL](http://www.testmmicl.work). The Vicuna version of MMICL and Chat Mode are presently under development, so they may require careful adjustment of generation parameters and may not work correctly.
2. [09-15] Our [paper](https://arxiv.org/abs/2309.07915) has been uploaded to arXiv.
3. [09-01] The [MIC](https://huggingface.co/datasets/BleachNick/MIC_full) data has released on the huggingface hub.
4. [08-23] Reach the 1st on [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation), 1st on [MMBench](https://opencompass.org.cn/leaderboard-multimodal)
5. [08-21] The [MMICL-FLANT5XXL](https://huggingface.co/BleachNick/MMICL-Instructblip-T5-xxl) and [MMICL-Tiny](https://huggingface.co/BleachNick/MMICL-Instructblip-T5-xl) model has released on the huggingface hub.
## Temporary Demo for MMICL
[Playground for MMICL-FLANT5XXL](http://www.testmmicl.work/)
It supports multi-image input as well as video input.
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
**MMICL (Multi-Modal In-Context Learning)** is a multimodal vision-language model that incorporates blip2/instructblip.
It has the ability to analyze and understand multiple images, as well as follow instructions.
### Model Description
MMICL outperforms VL models of the same size and performs exceptionally well on complex visual reasoning datasets.
As of 21st Aug. 2023, it achieves **state-of-the-art** performance on both multimodal task leaderboards and a wide range of vision-language tasks.
Furthermore, it showcases new capabilities in video understanding and multimodal in-context learning (M-ICL).
+ **Capability of referring to and reasoning over multiple images**
+ **Manually constructed In-context instruction tuning dataset**
+ Till 21st Aug. 2023 **1st on [MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation), 1st on [MMBench](https://opencompass.org.cn/leaderboard-multimodal)**
+ Visual Encoder: VIT-L from CLIP/ ViT-G/14 from EVA-CLIP
+ Pre-trained LLM: FlanT5-XL/ FlanT5-XXL/ Vicuna-7B/ Vicuna-13B
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **License:** MIT
- **Finetuned from model :** [instructblip-flan-t5-xxl](https://huggingface.co/Salesforce/instructblip-flan-t5-xxl)
<!-- Provide the basic links for the model. -->
- **Repository:** [MMICL](https://github.com/HaozheZhao/MIC)
## How to Get Started with the Model
the images are shown in our github repo [MMICL](https://github.com/HaozheZhao/MIC)
```python
# For T5 based model
from model.instructblip import InstructBlipConfig, InstructBlipModel, InstructBlipPreTrainedModel,InstructBlipForConditionalGeneration,InstructBlipProcessor
import datasets
import json
import transformers
from PIL import Image
import torch
model_type="instructblip"
model_ckpt="BleachNick/MMICL-Instructblip-T5-xxl"
processor_ckpt = "Salesforce/instructblip-flan-t5-xxl"
config = InstructBlipConfig.from_pretrained(model_ckpt )
if 'instructblip' in model_type:
model = InstructBlipForConditionalGeneration.from_pretrained(
model_ckpt,
config=config).to('cuda:0',dtype=torch.bfloat16)
image_palceholder="图"
sp = [image_palceholder]+[f"<image{i}>" for i in range(20)]
processor = InstructBlipProcessor.from_pretrained(
processor_ckpt
)
sp = sp+processor.tokenizer.additional_special_tokens[len(sp):]
processor.tokenizer.add_special_tokens({'additional_special_tokens':sp})
if model.qformer.embeddings.word_embeddings.weight.shape[0] != len(processor.qformer_tokenizer):
model.qformer.resize_token_embeddings(len(processor.qformer_tokenizer))
replace_token="".join(32*[image_palceholder])
image = Image.open ("images/cal_num1.png")
image1 = Image.open ("images/cal_num2.png")
image2 = Image.open ("images/cal_num3.png")
images = [image,image1,image2]
prompt = [f'Use the image 0: <image0>{replace_token},image 1: <image1>{replace_token} and image 2: <image2>{replace_token} as a visual aid to help you calculate the equation accurately. image 0 is 2+1=3.\nimage 1 is 5+6=11.\nimage 2 is"']
prompt = " ".join(prompt)
inputs = processor(images=images, text=prompt, return_tensors="pt")
inputs['pixel_values'] = inputs['pixel_values'].to(torch.bfloat16)
inputs['img_mask'] = torch.tensor([[1 for i in range(len(images))]])
inputs['pixel_values'] = inputs['pixel_values'].unsqueeze(0)
inputs = inputs.to('cuda:0')
outputs = model.generate(
pixel_values = inputs['pixel_values'],
input_ids = inputs['input_ids'],
attention_mask = inputs['attention_mask'],
img_mask = inputs['img_mask'],
do_sample=False,
max_length=50,
min_length=1,
set_min_padding_size =False,
)
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print(generated_text)
# output: 3x6=18"
```
#### Training Hyperparameters
- **Training regime:** [fp32, bf16 mixed precision, bf16 non-mixed precision] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
|
facebook/mms-tts-tam | facebook | 2024-02-19T14:10:27Z | 434 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-09-01T10:44:07Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Tamil Text-to-Speech
This repository contains the **Tamil (tam)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-tam")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-tam")
text = "some example text in the Tamil language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
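As noted in the Model Details section, the duration predictor makes generation stochastic, so repeated runs produce different waveforms; a minimal sketch of fixing the seed for reproducible output:
```python
import torch
from transformers import set_seed

set_seed(555)  # fix the random seeds so repeated runs yield the same waveform

with torch.no_grad():
    output = model(**inputs).waveform
```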
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output.numpy(), rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
jondurbin/spicyboros-13b-2.2-gguf | jondurbin | 2023-09-13T18:07:14Z | 434 | 6 | null | [
"gguf",
"not-for-all-audiences",
"license:llama2",
"region:us"
]
| null | 2023-09-10T08:31:08Z | ---
license: llama2
tags:
- not-for-all-audiences
---
GGUF of https://hf.co/jondurbin/spicyboros-13b-2.2 |
maddes8cht/mosaicml-mpt-7b-instruct-gguf | maddes8cht | 2023-11-01T15:36:54Z | 434 | 1 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
]
| null | 2023-10-26T22:12:33Z | ---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# mpt-7b-instruct - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it hard to discern a quality difference from the original model; ask your model the same question twice and you may see bigger differences between the two answers than between the quantized and original model.
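Whichever quantization variant you choose, the resulting `.gguf` file can be used from llama.cpp-based tools; a minimal llama-cpp-python sketch (the filename is an assumption, substitute the variant you actually downloaded):
```python
from llama_cpp import Llama

# point model_path at the quantized file downloaded from this repository
llm = Llama(model_path="mosaicml-mpt-7b-instruct-Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nExplain what MPT-7B-Instruct is in one sentence.\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```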
---
# Original Model Card:
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
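For example, reusing the `pipe` defined earlier, a short sketch of sending the formatted prompt through the model (the generation settings are illustrative):
```python
with torch.autocast('cuda', dtype=torch.bfloat16):
    result = pipe(fmt_ex,
                  max_new_tokens=100,
                  do_sample=True,
                  use_cache=True)
print(result[0]['generated_text'])
```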
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
***End of original Model File***
---
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
TheBloke/XwinCoder-34B-GGUF | TheBloke | 2023-11-19T19:10:18Z | 434 | 10 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:Xwin-LM/XwinCoder-34B",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-11-19T17:23:34Z | ---
base_model: Xwin-LM/XwinCoder-34B
inference: false
license: llama2
model_creator: Xwin-LM
model_name: XwinCoder 34B
model_type: llama
prompt_template: "<system>: You are an AI coding assistant that helps people with\
\ programming. Write a response that appropriately completes the user's request.\n\
<user>: {prompt}\n<AI>: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# XwinCoder 34B - GGUF
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [XwinCoder 34B](https://huggingface.co/Xwin-LM/XwinCoder-34B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Xwin-LM's XwinCoder 34B](https://huggingface.co/Xwin-LM/XwinCoder-34B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/XwinCoder-34B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/XwinCoder-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/XwinCoder-34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: XWin-Coder
```
<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.
<user>: {prompt}
<AI>:
```
<!-- prompt-template end -->
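As a small, hedged sketch (the helper function below is just for illustration and is not part of the original repo), the template can be filled in with ordinary Python string formatting before being passed to llama.cpp or any of the clients listed above:
```python
# Hypothetical helper that fills the XWin-Coder prompt template with a user request.
XWINCODER_TEMPLATE = (
    "<system>: You are an AI coding assistant that helps people with programming. "
    "Write a response that appropriately completes the user's request.\n"
    "<user>: {prompt}\n"
    "<AI>: "
)

def build_xwincoder_prompt(user_request: str) -> str:
    """Return a prompt string in the XWin-Coder format."""
    return XWINCODER_TEMPLATE.format(prompt=user_request)

print(build_xwincoder_prompt("Write a Python function that reverses a linked list."))
```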
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
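As a rough, back-of-the-envelope sanity check (not an official calculation), the bits-per-weight figures above line up with the file sizes in the Provided Files table below: file size is approximately parameter count times bpw divided by 8 bytes.
```python
# Back-of-the-envelope size estimate: file size ≈ parameter count × bits-per-weight / 8.
# The "_S" variants track the pure bpw figures closely; Q2_K and the "_M"/"_L" variants
# mix quant types (and keep some tensors at higher precision), so they come out larger.
PARAMS = 34e9  # XwinCoder-34B has roughly 34 billion parameters

for name, bpw in [("Q3_K_S", 3.4375), ("Q4_K_S", 4.5), ("Q5_K_S", 5.5), ("Q6_K", 6.5625)]:
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.1f} GB")  # roughly 14.6, 19.1, 23.4, 27.9 GB (close to the table below)
```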
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [xwincoder-34b.Q2_K.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [xwincoder-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [xwincoder-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [xwincoder-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [xwincoder-34b.Q4_0.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [xwincoder-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [xwincoder-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [xwincoder-34b.Q5_0.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [xwincoder-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [xwincoder-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [xwincoder-34b.Q6_K.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [xwincoder-34b.Q8_0.gguf](https://huggingface.co/TheBloke/XwinCoder-34B-GGUF/blob/main/xwincoder-34b.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/XwinCoder-34B-GGUF and below it, a specific filename to download, such as: xwincoder-34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/XwinCoder-34B-GGUF xwincoder-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/XwinCoder-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/XwinCoder-34B-GGUF xwincoder-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m xwincoder-34b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<system>: You are an AI coding assistant that helps people with programming. Write a response that appropriately completes the user's request.\n<user>: {prompt}\n<AI>:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/XwinCoder-34B-GGUF", model_file="xwincoder-34b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
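As a brief, hedged sketch (import paths vary by LangChain version; newer releases expose `LlamaCpp` from `langchain_community.llms`), wiring one of these GGUF files into LangChain via llama-cpp-python might look roughly like this:
```python
# Minimal LangChain + llama-cpp-python sketch; adjust the path and parameters to your setup.
from langchain.llms import LlamaCpp  # newer versions: from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./xwincoder-34b.Q4_K_M.gguf",  # a downloaded GGUF file
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # set to 0 for CPU-only inference
    temperature=0.7,
)

prompt = (
    "<system>: You are an AI coding assistant that helps people with programming. "
    "Write a response that appropriately completes the user's request.\n"
    "<user>: Write a bash one-liner that counts lines in all .py files.\n"
    "<AI>: "
)
print(llm.invoke(prompt))  # on older LangChain versions, call llm(prompt) instead
```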
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Xwin-LM's XwinCoder 34B
# XwinCoder
We are glad to introduce our instruction finetuned code generation models based on CodeLLaMA: XwinCoder. We release model weights and evaluation code.
**Repository:** [https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Coder)
**Models:**
| Model | 🤗hf link | HumanEval pass@1 | MBPP pass@1 | APPS-intro pass@5 |
|-------|------------|----------|------|-------------|
| XwinCoder-7B | [link](https://huggingface.co/Xwin-LM/XwinCoder-7B) | 63.8 | 57.4 | 31.5 |
| XwinCoder-13B | [link](https://huggingface.co/Xwin-LM/XwinCoder-13B) | 68.8 | 60.1 | 35.4 |
| XwinCoder-34B | [link](https://huggingface.co/Xwin-LM/XwinCoder-34B) | 74.2 | 64.8 | 43.0 |
## Updates
- 💥 We released [**XwinCoder-7B**](https://huggingface.co/Xwin-LM/XwinCoder-7B), [**XwinCoder-13B**](https://huggingface.co/Xwin-LM/XwinCoder-13B), [**XwinCoder-34B**](https://huggingface.co/Xwin-LM/XwinCoder-34B). Our XwinCoder-34B reached 74.2 on HumanEval and it **achieves performance comparable to GPT-3.5-turbo on 6 benchmarks**.
- We support evaluating instruction finetuned models on HumanEval, MBPP, APPS, DS1000 and MT-Bench. See our github repository.
## Overview

* To fully demonstrate our model's coding capabilities in real-world usage scenarios, we have conducted thorough evaluations on several existing mainstream coding capability leaderboards (rather than only on the currently most popular HumanEval).
* As shown in the radar chart results, our 34B model **achieves performance comparable to GPT-3.5-turbo on coding abilities**.
* It is worth mentioning that, to ensure accurate visualization, our radar chart has not been rescaled (only translated), except that the MT-Bench score is multiplied by 10 to make it more comparable with the other benchmarks.
* Multiple-E-avg6 refers to the 6 languages used in the CodeLLaMA paper. The results for GPT-4 and GPT-3.5-turbo were obtained by us; more details will be released later.
## Demo
We provide a chat demo in our github repository, here are some examples:

<!-- original-model-card end -->
|
holwech/Depth-Anything-Controlnet | holwech | 2024-06-04T10:59:43Z | 434 | 1 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2024-01-22T16:57:44Z | Entry not found |
ChrisWilson011016/5DyAH24Z2tbYM1kcpzwmQs5zHGmp4i2SVQuA6rSyf67VMJ4N_vgg | ChrisWilson011016 | 2024-03-04T19:01:16Z | 434 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-29T12:59:31Z | Entry not found |
DenisTheDev/Blitz-AI-ULTRA | DenisTheDev | 2024-03-26T09:59:22Z | 434 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-26T06:50:08Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---

# Blitz-AI-ULTRA
Our current largest model and so far the strongest.
# Why this model?
The model was created as a custom AI solution for us. This one is not yet trained for specific use cases.
## Model Examination [optional]
[More Information Needed]
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/OverBloom-7b-GGUF | mradermacher | 2024-05-06T05:21:29Z | 434 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nobita3921/OverBloom-7b",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-03T03:57:02Z | ---
base_model: nobita3921/OverBloom-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nobita3921/OverBloom-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
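For a quick start, a minimal llama-cpp-python sketch along those lines might look like this (hedged: pick a filename from the table below and adjust the context length and GPU offload to your hardware):
```python
# Minimal llama-cpp-python usage sketch; the filename should match one of the quants below.
from llama_cpp import Llama

llm = Llama(
    model_path="./OverBloom-7b.Q4_K_M.gguf",  # a downloaded GGUF file
    n_ctx=2048,      # adjust to the model's supported context length
    n_gpu_layers=0,  # raise this if GPU offloading is available
)

output = llm("Write a short poem about open-source language models.", max_tokens=128)
print(output["choices"][0]["text"])
```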
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OverBloom-7b-GGUF/resolve/main/OverBloom-7b.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duyntnet/Nous-Hermes-2-Yi-34B-imatrix-GGUF | duyntnet | 2024-04-28T04:38:34Z | 434 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Nous-Hermes-2-Yi-34B",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-27T09:31:39Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Nous-Hermes-2-Yi-34B
---
Quantizations of https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B
# From original readme
# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
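For example, a minimal sketch of that generation path (assuming the unquantised model and its tokenizer are already loaded with `transformers`) could look like this:
```python
# Build a ChatML prompt with the generation header appended, then generate.
# Assumes `model` and `tokenizer` for the unquantised model are already loaded.
messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends "<|im_start|>assistant\n"
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```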
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

|
LiteLLMs/Llama3-OpenBioLLM-8B-GGUF | LiteLLMs | 2024-04-29T17:55:24Z | 434 | 0 | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"GGUF",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
]
| null | 2024-04-29T16:51:27Z |
---
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- GGUF
base_model: meta-llama/Meta-Llama-3-8B
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
quantized_by: andrijdavid
---
# Llama3-OpenBioLLM-8B-GGUF
- Original model: [Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama3-OpenBioLLM-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama3-OpenBioLLM-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama3-OpenBioLLM-8B
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding: matches the "temperature = 0" advice above (sampling with temperature=0.0 is invalid)
    top_p=0.9,        # only takes effect when do_sample=True
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
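As a rough illustration only (the run itself used Axolotl, so this is a hand-written approximation rather than the authors' training code), the PEFT hyperparameters above map onto a `peft` `LoraConfig` roughly as follows:
```python
# Approximate peft equivalent of the QLoRA hyperparameters listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```
In a QLoRA setup, a config like this would be applied on top of a base model loaded in 4-bit precision (e.g. via bitsandbytes).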
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal, Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
<!-- original-model-card end -->
|
Niggendar/colorburstXL_ponyV10 | Niggendar | 2024-05-04T08:43:03Z | 434 | 3 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-05-04T08:39:13Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
scb10x/typhoon-v1.5-72b-instruct | scb10x | 2024-06-03T15:38:25Z | 434 | 4 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"th",
"en",
"arxiv:2303.08774",
"arxiv:2312.13951",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-06T06:14:22Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
language:
- th
- en
pipeline_tag: text-generation
---
**Typhoon-v1.5-72B-instruct: Thai Large Language Model (Instruct)**
**Typhoon-v1.5-72B-instruct** is an *instruct* Thai 🇹🇭 large language model with 72 billion parameters, based on Qwen1.5-72B.
**We also have a newer release, Typhoon 1.5x 70B, which is better suited for application use cases.** It is available [here](https://huggingface.co/scb10x/llama-3-typhoon-v1.5x-70b-instruct).

For release post, please see our [blog](https://blog.opentyphoon.ai/typhoon-1-5-release-a9364cb8e8d7).
## **Model Description**
- **Model type**: A 72B instruct decoder-only model based on the Qwen1.5 architecture.
- **Requirement**: transformers 4.38.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: [Qwen License](https://huggingface.co/Qwen/Qwen1.5-72B/raw/main/LICENSE)
## **Performance**
| Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | M3Exam | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Typhoon-1.5 72B | 0.562 | **0.716** | **0.778** | 0.5 | **0.528** | **0.6168** | 0.587 | 0.7271 |
| OpenThaiGPT 1.0.0 70B | 0.447 | 0.492 | **0.778** | 0.5 | 0.319 | 0.5072 | 0.493 | 0.6167 |
| GPT-3.5-turbo(01-2024) | 0.358 | 0.279 | 0.678 | 0.345 | 0.318 | 0.3956 | 0.316 | 0.700** |
| GPT-4(04-2024) | **0.589** | 0.594 | 0.756 | **0.517** | 0.616 | 0.6144 | **0.626** | **0.864**** |
** We report the MMLU score that is reported in [GPT-4 Tech Report](https://arxiv.org/abs/2303.08774).
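For reference, the Average (ThaiExam) column is consistent with the simple mean of the five Thai exam subsets; e.g., for Typhoon-1.5 72B: (0.562 + 0.716 + 0.778 + 0.500 + 0.528) / 5 = 0.6168.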
## Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "scb10x/typhoon-v1.5-72b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
) # We do not recommend loading the model using 4-bit and 8-bit BNB as it may produce inaccurate results.
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "ขอสูตรไก่ย่าง"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=1024,
do_sample=True,
temperature=0.6,
top_p=0.9,
repetition_penalty=1.15
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Chat Template
We use chatml chat-template.
```python
{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content']}}{% if (loop.last and add_generation_prompt) or not loop.last %}{{ '<|im_end|>' + '\n'}}{% endif %}{% endfor %}
{% if add_generation_prompt and messages[-1]['role'] != 'assistant' %}{{ '<|im_start|>assistant\n' }}{% endif %}
```
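For reference, applying this template to the two messages in the usage example above (with `add_generation_prompt=True`) produces a ChatML-style prompt like the following:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
ขอสูตรไก่ย่าง<|im_end|>
<|im_start|>assistant
```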
## **Intended Uses & Limitations**
This model is an instruct model; however, it is still under development. It incorporates some level of guardrails, but it may still produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case.
## **Follow us**
**https://twitter.com/opentyphoon**
## **Support / Ask any question**
**https://discord.gg/CqyBscMFpg**
## **SCB10X AI Team**
- Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Natapong Nitarach, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon v1.5 useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## **Contact Us**
- General & Collaboration: **[[email protected]](mailto:[email protected])**, **[[email protected]](mailto:[email protected])**
- Technical: **[[email protected]](mailto:[email protected])** |
RichardErkhov/bigcode_-_starcoder2-7b-gguf | RichardErkhov | 2024-05-10T00:28:06Z | 434 | 1 | null | [
"gguf",
"arxiv:2305.13245",
"arxiv:2205.14135",
"arxiv:2004.05150",
"arxiv:2207.14255",
"arxiv:2402.19173",
"region:us"
]
| null | 2024-05-09T22:08:54Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starcoder2-7b - GGUF
- Model creator: https://huggingface.co/bigcode/
- Original model: https://huggingface.co/bigcode/starcoder2-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [starcoder2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q2_K.gguf) | Q2_K | 2.64GB |
| [starcoder2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.IQ3_XS.gguf) | IQ3_XS | 2.85GB |
| [starcoder2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.IQ3_S.gguf) | IQ3_S | 2.97GB |
| [starcoder2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q3_K_S.gguf) | Q3_K_S | 2.96GB |
| [starcoder2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.IQ3_M.gguf) | IQ3_M | 3.1GB |
| [starcoder2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q3_K.gguf) | Q3_K | 3.41GB |
| [starcoder2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q3_K_M.gguf) | Q3_K_M | 3.41GB |
| [starcoder2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q3_K_L.gguf) | Q3_K_L | 3.79GB |
| [starcoder2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.IQ4_XS.gguf) | IQ4_XS | 3.68GB |
| [starcoder2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q4_0.gguf) | Q4_0 | 3.82GB |
| [starcoder2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [starcoder2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [starcoder2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q4_K.gguf) | Q4_K | 4.15GB |
| [starcoder2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q4_K_M.gguf) | Q4_K_M | 4.15GB |
| [starcoder2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q4_1.gguf) | Q4_1 | 4.22GB |
| [starcoder2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q5_0.gguf) | Q5_0 | 4.63GB |
| [starcoder2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q5_K_S.gguf) | Q5_K_S | 4.63GB |
| [starcoder2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q5_K.gguf) | Q5_K | 4.8GB |
| [starcoder2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q5_K_M.gguf) | Q5_K_M | 4.8GB |
| [starcoder2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q5_1.gguf) | Q5_1 | 5.03GB |
| [starcoder2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-7b-gguf/blob/main/starcoder2-7b.Q6_K.gguf) | Q6_K | 5.49GB |
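As a rough sketch, one of the quants above can be fetched and run locally as follows. This assumes a recent `huggingface_hub` (for `huggingface-cli download`) and a recent build of [llama.cpp](https://github.com/ggerganov/llama.cpp); older llama.cpp builds name the binary `main` instead of `llama-cli`.

```bash
# download a single quant file (not the whole repository)
huggingface-cli download RichardErkhov/bigcode_-_starcoder2-7b-gguf \
  starcoder2-7b.Q4_K_M.gguf --local-dir .

# run it with llama.cpp; the prompt is just an illustration
./llama-cli -m starcoder2-7b.Q4_K_M.gguf -p "def print_hello_world():" -n 128
```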
Original model description:
---
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.2
top_p: 0.95
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
datasets:
- bigcode/the-stack-v2-train
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-7b
results:
- task:
type: text-generation
dataset:
name: CruxEval-I
type: cruxeval-i
metrics:
- type: pass@1
value: 34.6
- task:
type: text-generation
dataset:
name: DS-1000
type: ds-1000
metrics:
- type: pass@1
value: 27.8
- task:
type: text-generation
dataset:
name: GSM8K (PAL)
type: gsm8k-pal
metrics:
- type: accuracy
value: 40.4
- task:
type: text-generation
dataset:
name: HumanEval+
type: humanevalplus
metrics:
- type: pass@1
value: 29.9
- task:
type: text-generation
dataset:
name: HumanEval
type: humaneval
metrics:
- type: pass@1
value: 35.4
- task:
type: text-generation
dataset:
name: RepoBench-v1.1
type: repobench-v1.1
metrics:
- type: edit-similarity
value: 72.07
---
# StarCoder2
<center>
<img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/starcoder2_banner.png" alt="SC2" width="900" height="600">
</center>
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
StarCoder2-7B is a 7B parameter model trained on 17 programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [a sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 3.5+ trillion tokens.
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [Link](https://huggingface.co/papers/2402.19173)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** 17 Programming languages
## Use
### Intended use
The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.
### Generation
Here are some examples to get started with the model. You can find a script for fine-tuning in StarCoder2's [GitHub repository](https://github.com/bigcode-project/starcoder2).
First, make sure to install `transformers` from source:
```bash
pip install git+https://github.com/huggingface/transformers.git
```
#### Running the model on CPU/GPU/multi GPU
* _Using full precision_
```python
# pip install git+https://github.com/huggingface/transformers.git # TODO: merge PR to main
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder2-7b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 29232.57 MB
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "bigcode/starcoder2-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 14616.29 MB
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
# to use 4bit use `load_in_4bit=True` instead
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
checkpoint = "bigcode/starcoder2-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
# load_in_8bit
Memory footprint: 7670.52 MB
# load_in_4bit
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 4197.64 MB
```
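#### Fill-in-the-Middle
Since the model was trained with the Fill-in-the-Middle objective, it can also complete a hole between a given prefix and suffix. A minimal sketch is shown below; it assumes the FIM special tokens `<fim_prefix>`, `<fim_suffix>` and `<fim_middle>` are present in the tokenizer (check `tokenizer.special_tokens_map` if unsure), and the function being completed is just an illustration.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoder2-7b"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# prefix and suffix around the hole we want the model to fill in
prompt = "<fim_prefix>def fib(n):\n    <fim_suffix>\n    return result<fim_middle>"

inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```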
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses and code with no license only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/search-v2) that lets you search through the pretraining data to identify where the generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 17 programming languages. The predominant natural language in the sources is English, although other languages are also present. As such the model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and contain bugs or exploits. See [the paper](https://huggingface.co/papers/2402.19173) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** Transformer decoder with grouped-query and sliding window attention and Fill-in-the-Middle objective
- **Pretraining steps:** 1 million
- **Pretraining tokens:** 3.5+ trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 432 H100
## Software
- **Framework:** [nanotron](https://github.com/huggingface/nanotron/)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```bash
@misc{lozhkov2024starcoder,
title={StarCoder 2 and The Stack v2: The Next Generation},
author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2024},
eprint={2402.19173},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
BeegolAI/lora_model_llama-3_beegol_q4_k_m-gguf | BeegolAI | 2024-05-17T14:57:48Z | 434 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-13T17:54:07Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** BeegolAI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jacobhoffmann/Mistral-7B-TestGen-Dart_v0.6-GGUF | jacobhoffmann | 2024-05-23T18:54:35Z | 434 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-23T17:53:44Z | Entry not found |
KnutJaegersberg/Qwen2-Deita-500m | KnutJaegersberg | 2024-06-06T17:13:08Z | 434 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-24T05:45:39Z | ---
license: apache-2.0
license_name: qwen
license_link: LICENSE
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### User:
How do you fine tune a large language model?
### Assistant:
``` |
RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf | RichardErkhov | 2024-05-26T14:42:46Z | 434 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-26T12:32:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-chat-hf - GGUF
- Model creator: https://huggingface.co/4bit/
- Original model: https://huggingface.co/4bit/Llama-2-7b-chat-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-7b-chat-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q2_K.gguf) | Q2_K | 2.36GB |
| [Llama-2-7b-chat-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Llama-2-7b-chat-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Llama-2-7b-chat-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Llama-2-7b-chat-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Llama-2-7b-chat-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K.gguf) | Q3_K | 3.07GB |
| [Llama-2-7b-chat-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Llama-2-7b-chat-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Llama-2-7b-chat-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Llama-2-7b-chat-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Llama-2-7b-chat-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Llama-2-7b-chat-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Llama-2-7b-chat-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K.gguf) | Q4_K | 3.8GB |
| [Llama-2-7b-chat-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Llama-2-7b-chat-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Llama-2-7b-chat-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Llama-2-7b-chat-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Llama-2-7b-chat-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K.gguf) | Q5_K | 4.45GB |
| [Llama-2-7b-chat-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Llama-2-7b-chat-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Llama-2-7b-chat-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q6_K.gguf) | Q6_K | 5.15GB |
| [Llama-2-7b-chat-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/4bit_-_Llama-2-7b-chat-hf-gguf/blob/main/Llama-2-7b-chat-hf.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
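For illustration, a single-turn prompt in this format looks roughly like the following (the system and user messages are placeholders; the `chat_completion` reference code linked above assembles the tags, tokens and whitespace for you):

```
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant.
<</SYS>>

How do I grill chicken? [/INST]
```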
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
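As a back-of-the-envelope reading of the table, the 7B model's 184,320 GPU-hours at 400 W correspond to roughly 74 MWh of GPU energy, and the reported 31.22 tCO<sub>2</sub>eq therefore implies a carbon intensity of about 0.42 kgCO<sub>2</sub>eq per kWh before offsets; the 70B row is consistent with the same figure.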
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
IEITYuan/Yuan2-M32-gguf | IEITYuan | 2024-06-03T06:12:54Z | 434 | 6 | null | [
"gguf",
"arxiv:2405.17976",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-27T08:17:40Z | ---
license: apache-2.0
---
<div align="center">
<h1>
Yuan2.0-M32: Mixture of Experts with Attention Router
</h1>
</div>
<p align="center">
🌎 <a href="https://github.com/IEIT-Yuan/Yuan2.0-M32" target="_blank">GitHub</a> • 🤗 <a href="https://huggingface.co/IEITYuan" target="_blank">Hugging Face</a> • 💬 <a href="https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/images/%E6%BA%90%E5%85%AC%E4%BC%97%E5%8F%B7%E4%BA%8C%E7%BB%B4%E7%A0%81.png" target="_blank">WeChat</a>• 📎 <a href="https://arxiv.org/abs/2405.17976" target="_blank">Yuan2.0-M32 Paper</a>
</p>
<p align="center">
🚀🚀 <a href="https://github.com/IEIT-Yuan/3rd_party/tree/main/llama-cpp" target="_blank">llama.cpp for Yuan2.0 LLMs</a>
</p>
<div align="center">
<a href="code_license">
<img alt="Code License" src="https://img.shields.io/badge/Apache%202.0%20-green?style=flat&label=Code%20License&link=https%3A%2F%2Fgithub.com%2FIEIT-Yuan%2FYuan-2.0-MoE%3Ftab%3DApache-2.0-1-ov-file"/>
</a>
<a href="model_license">
<img alt="Model License" src="https://img.shields.io/badge/Yuan2.0%20License-blue?style=flat&logoColor=blue&label=Model%20License&color=blue&link=https%3A%2F%2Fgithub.com%2FIEIT-Yuan%2FYuan-2.0%2Fblob%2Fmain%2FLICENSE-Yuan" />
</a>
</div>
-----
## 1. Introduction
**Yuan2.0-M32** is a Mixture-of-Experts (MoE) language model with 32 experts, of which 2 are active. A new router network, Attention Router, is proposed and has been adopted for more efficient expert selection, boosting accuracy by 3.8% over models using a classical router network. Yuan 2.0-M32 is trained from scratch with 2000B tokens, and its training computation is only 9.25% of that required by a dense model of the same parameter scale. Demonstrating competitive capabilities in coding, math, and various specialized fields, Yuan2.0-M32 operates with only 3.7B active parameters out of a total 40B, and a forward computation of 7.4 GFLOPS per token, which is just 1/19th of Llama3-70B's requirement. Yuan 2.0-M32 has surpassed Llama3-70B on the MATH and ARC-Challenge benchmarks, achieving accuracies of 55.9% and 95.8%, respectively. The basic information of the **Yuan2.0-M32** model is as follows:
+ **Total Parameters :** 40B <br>
+ **Experts:** 32 <br>
+ **Active Experts:** 2 <br>
+ **Active Parameters:** 3.7B <br>
+ **Training Tokens:** 2000B tokens <br>
+ **Sequence Length:** 16K <br>
The technical report for the Yuan2.0-M32 model has been released, and you can find more detailed technical information and evaluation results by referring to the <a href="https://arxiv.org/abs/2405.17976" target="_blank">**paper**</a>.
## 2. Model Downloads
| Model | Sequence Length | Type | Download |
| :----------: | :------: | :-------: |:---------------------------: |
| Yuan2.0-M32 | 16K | Megatron | [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32)
| Yuan2.0-M32-HF | 16K | HuggingFace | [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-hf)
| Yuan2.0-M32-GGUF | 16K | GGUF | [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-gguf)
| Yuan2.0-M32-GGUF-INT4 | 16K | GGUF | [HuggingFace](https://huggingface.co/IEITYuan/Yuan2-M32-gguf-int4)
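As a rough sketch, a downloaded GGUF file from the repositories above can be run with the Yuan-enabled llama.cpp fork linked at the top of this card. The file name below is a placeholder (use the actual file name from the GGUF repository), and the binary name and flags depend on how the fork is built:

```bash
# build the Yuan2.0 llama.cpp fork first (see the llama.cpp link above), then:
./main -m ./Yuan2-M32-Q4_K_M.gguf -p "Write a short poem about the sea." -n 256
```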
## 3. Evaluation
**3.1 Benchmarks** 🏆
We conducted a thorough evaluation of the Yuan2.0-M32 model across a range of benchmarks, including HumanEval, GSM8K, MMLU, Math, and ARC-Challenge. These benchmarks are designed to test the model's proficiency in key areas such as natural language understanding, knowledge acquisition, mathematical computation and reasoning, and code generation. Yuan2.0-M32 has shown a consistent and significant advantage over other models like Llama3-8B and Mistral-8×7B, excelling in all evaluated tasks. Remarkably, its overall performance is on par with the more substantial Llama3-70B model. The detailed evaluation results are outlined in the subsequent table.
| Model | HumanEval | GSM8K | MMLU | Math | ARC-C\* |
| ------------------ | :---------------: | :------------: | :---------------: | :---------------: | :---------------:|
| Llama3-70B | **81.7%** | **93%** | **80.3** | 50.4% | 93.3% |
| Llama3-8B | 62.2% | 79.6% | 68.4% | 30% | 78.6% |
| Phi-3-medium | 62.2% | 91.0% | 78.0% | - | 91.6% |
| Phi-3-small | 61% | 89.6% | 75.7% | - | 90.7% |
| Phi-3-mini | 58.5% | 82.5% | 68.8% | - | 84.9% |
| Mistral-8*22B | 45.1% | 78.6% | 77.8% | 41.8% | 91.3% |
| Mistral-8*7B | 40.2% | 58.4% | 70.86% | 28.4% | 85.9% |
| **Yuan2.0-M32** | 74.4% | 92.7% | 72.2% | **55.9%** | **95.8%** |
\* __*ARC-C*__: AI2 Reasoning Challenge (ARC) benchmark contains more complex parts that need further reasoning.
-----
**3.2 Computational Utilization for Model**
| Model | Params (B) | Active Params (B) | GFLOPs/token (Inference) | GFLOPs/token (Fine-tune) | Mean Accuracy | Mean Accuracy / GFLOPs per token (Inference) |
| ------------------ | :---------------: | :------------: | :---------------: | :---------------: | :---------------:|:---------------:|
| Llama3-70B | 70 | 70 | 140 | 420 | 79.25 | 0.57 |
| Llama3-8B | 8 | 8 | 16 | 48 | 64.15 | 4.00 |
| Mistral-8*22B | 141 | 39 | 78 | 234 | 72.38 | 0.93 |
| Mistral-8*7B | 47 | 12.9 | 25.8 | 77.3 | 60.83 | 2.36 |
| **Yuan2.0-M32** | 40 | 3.7 | 7.4 | 22.2 | 79.15 | 10.69 |
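The last column corresponds to the mean accuracy divided by the inference GFLOPs per token; e.g., for Yuan2.0-M32: 79.15 / 7.4 ≈ 10.69, versus 79.25 / 140 ≈ 0.57 for Llama3-70B.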
## 4. Quick Start
**4.1 Environment Config**
We strongly recommend using the latest release of the Docker images of Yuan2.0-M32. You can launch an instance of the Yuan 2.0 container with the following Docker commands:
```bash
docker pull yuanmodel/yuan2.0:m32
docker run --gpus all --privileged --ulimit stack=68719476736 --shm-size=1000G -itd -v /path/to/yuan_2.0:/workspace/yuan_2.0 -v /path/to/dataset:/workspace/dataset -v /path/to/checkpoints:/workspace/checkpoints --name your_name yuanmodel/yuan2.0:m32
docker exec -it your_name bash
```
**4.2 Data Preprocess**
We have provided the data preprocessing script. See the documentation [here](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/data_process.md).
**4.3 Model Pretrain**
We've provided several scripts for pretraining in the [`example`](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/examples). The details can be seen from documentation [here](https://github.com/IEIT-Yuan/Yuan2.0-M32/blob/main/docs/pretrain.md).
**4.4 Inference Service**
For a detailed deployment plan, please refer to [vllm](https://github.com/IEIT-Yuan/Yuan2.0-M32/edit/main/vllm/README_Yuan_vllm.md).
- For more information, please refer to [GitHub](https://github.com/IEIT-Yuan/Yuan2.0-M32) repository.
## 5. Statement of Agreement
The use of the source code in this repository requires compliance with the Apache 2.0 open source license agreement. The Yuan2.0 model supports commercial use and does not require authorization. Please understand and comply with the [《Yuan2.0 Model License Agreement》](./LICENSE-Yuan). Do not use the open source model and code, or derivatives generated from open source projects, for any purposes that may cause harm to the country and society, or for any services that have not undergone security assessment and filing. Although we have taken measures to ensure the compliance and accuracy of the data during training, the model has a huge number of parameters and is affected by probability and randomness factors, so we cannot guarantee the accuracy of the output content, and the model is easily misled by input instructions. This project does not assume any risks or responsibilities arising from data security or public opinion issues caused by the open source model and code, or from the model being misled, abused, disseminated, or improperly utilized. You will be solely responsible for the risks and consequences arising from the use, copying, distribution, and modification of the model in this open source project.
## 6. Contact Us
**If you have any questions, please raise an issue or contact us at** [email protected] |
DavidAU/Dark-Forest-V1-Ultra-Quality-20b-GGUF | DavidAU | 2024-05-31T10:28:08Z | 434 | 2 | null | [
"gguf",
"story",
"roleplay",
"creative",
"rp",
"fantasy",
"story telling",
"32 bit upscale",
"ultra high precision",
"nsfw",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-30T12:47:09Z | ---
license: apache-2.0
language:
- en
tags:
- story
- roleplay
- creative
- rp
- fantasy
- story telling
- 32 bit upscale
- ultra high precision
- nsfw
---
<B> Ultra High Quality - 20B Dark Forest Version 1.0 - 32 bit upscale </b>
Fully rebuilt from master files, including full merge(s), to maintain full 32-bit precision right
up until it is compressed into GGUF files, which results in a top-to-bottom upgrade.
The result is superior performance in instruction following, reasoning, depth, nuance and emotion.
<img src="dark-forest.jpg">
On average this means a q4km operates at Q6 levels, and Q6 and Q8 exceed the original model's full-precision performance.
Perplexity drop (lower is better) is close to 10% (over 600 points for q4km) for all quants.
That means precision has been enhanced for all 20 billion parameters which affects "brain density" / "function",
instruction following and output quality.
Imatrix quants to follow shortly.
For more details, including a list of enhancements, see our other 32-bit
upscale, the "Space Whale 20B" rebuild, here:
[ https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF ]
Special thanks to "TEEZEE" for making a fantastic model.
<b> Info from the original model card: </B>
Warning: This model can produce NSFW content!
Results
- produces SFW and NSFW content without issues, switches context seamlessly.
- good at following instructions.
- good at tracking multiple characters in one scene.
- very creative, scenarios produced are mature and complicated, model doesn't shy from writing about PTSD, mental issues or complicated relationships.
- NSFW output is more creative and surprising than typical limaRP output.
- definitely for mature audiences, not only because of vivid NSFW content but also because of the overall maturity of the stories it produces.
- This is NOT Harry Potter level storytelling.
For original model spec and information please visit:
[ https://huggingface.co/TeeZee/DarkForest-20B-v1.0 ] |