modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---
timm/pit_b_distilled_224.in1k | timm | 2023-04-26T00:07:39Z | 564 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.16302",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-26T00:06:44Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for pit_b_distilled_224.in1k
A PiT (Pooling-based Vision Transformer) image classification model. Trained on ImageNet-1k with token-based distillation by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 74.8
- GMACs: 12.5
- Activations (M): 33.1
- Image size: 224 x 224
- **Papers:**
- Rethinking Spatial Dimensions of Vision Transformers: https://arxiv.org/abs/2103.16302
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/naver-ai/pit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('pit_b_distilled_224.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_b_distilled_224.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 256, 31, 31])
    # torch.Size([1, 512, 16, 16])
    # torch.Size([1, 1024, 8, 8])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_b_distilled_224.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{heo2021pit,
title={Rethinking Spatial Dimensions of Vision Transformers},
author={Byeongho Heo and Sangdoo Yun and Dongyoon Han and Sanghyuk Chun and Junsuk Choe and Seong Joon Oh},
booktitle = {International Conference on Computer Vision (ICCV)},
year={2021},
}
```
|
MariaK/vilt_finetuned_200 | MariaK | 2023-08-01T17:08:24Z | 564 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"dataset:vqa",
"base_model:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| visual-question-answering | 2023-08-01T16:10:06Z | ---
license: apache-2.0
base_model: dandelin/vilt-b32-mlm
tags:
- generated_from_trainer
datasets:
- vqa
model-index:
- name: vilt_finetuned_200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vilt_finetuned_200
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the vqa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
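Pending more details from the author, here is a minimal inference sketch (hypothetical; it assumes the repository includes the ViLT processor files alongside the fine-tuned weights):
```python
from transformers import pipeline
from PIL import Image

# Hypothetical usage sketch: assumes processor files are present in the repo.
vqa = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")

image = Image.open("example.jpg")  # any local image
print(vqa(image=image, question="What is in the image?"))
```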
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
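For reference, the list above corresponds roughly to the following `TrainingArguments` sketch; `output_dir` and every option not listed above are assumptions, not values from the original run.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
# Adam betas/epsilon and the linear scheduler are transformers defaults.
training_args = TrainingArguments(
    output_dir="vilt_finetuned_200",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```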
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
TheBloke/WizardLM-13B-V1.1-GGUF | TheBloke | 2023-09-27T12:52:57Z | 564 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"base_model:WizardLM/WizardLM-13B-V1.1",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-09-20T00:48:08Z | ---
license: other
model_name: WizardLM 13B V1.1
base_model: WizardLM/WizardLM-13B-V1.1
inference: false
model_creator: WizardLM
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WizardLM 13B V1.1 - GGUF
- Model creator: [WizardLM](https://huggingface.co/WizardLM)
- Original model: [WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF)
* [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wizardlm-13b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [wizardlm-13b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [wizardlm-13b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [wizardlm-13b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [wizardlm-13b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [wizardlm-13b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [wizardlm-13b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [wizardlm-13b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [wizardlm-13b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [wizardlm-13b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [wizardlm-13b-v1.1.Q6_K.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [wizardlm-13b-v1.1.Q8_0.gguf](https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGUF/blob/main/wizardlm-13b-v1.1.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-V1.1-GGUF and below it, a specific filename to download, such as: wizardlm-13b-v1.1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WizardLM-13B-V1.1-GGUF wizardlm-13b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WizardLM-13B-V1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-V1.1-GGUF wizardlm-13b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m wizardlm-13b-v1.1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-V1.1-GGUF", model_file="wizardlm-13b-v1.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
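#### Simple llama-cpp-python example code
The sketch below shows the equivalent flow with llama-cpp-python; the context size, GPU layer count, and stop string are assumptions to tailor to your system, not settings from this repo.
```python
from llama_cpp import Llama

# Minimal llama-cpp-python sketch (assumed settings; adjust for your hardware).
llm = Llama(
    model_path="./wizardlm-13b-v1.1.Q4_K_M.gguf",  # download the file first
    n_ctx=2048,       # assumed context size for this Llama-based model
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a haiku about llamas. ASSISTANT:"
)
output = llm(prompt, max_tokens=128, stop=["USER:"])
print(output["choices"][0]["text"])
```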
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: WizardLM's WizardLM 13B V1.1
This is the **Full-Weight** of WizardLM-13B V1.1 model.
## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401
- 🔥🔥🔥 [7/7/2023] We released **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please check out the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on the [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval scores are self-tested; we will push updates and request review. All tests were completed under the benchmarks' official settings.)
## WizardLM Inference Demo Script
We provide WizardLM inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
<!-- original-model-card end -->
|
PrunaAI/ChimeraLlama-3-8B-v2-GGUF-smashed | PrunaAI | 2024-05-02T23:57:58Z | 564 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-05-02T21:54:56Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the mlabonne/ChimeraLlama-3-8B-v2 model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/ChimeraLlama-3-8B-v2-GGUF-smashed and below it, a specific filename to download, such as: ChimeraLlama-3-8B-v2.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/ChimeraLlama-3-8B-v2-GGUF-smashed ChimeraLlama-3-8B-v2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/ChimeraLlama-3-8B-v2-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/ChimeraLlama-3-8B-v2-GGUF-smashed ChimeraLlama-3-8B-v2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run the model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m ChimeraLlama-3-8B-v2.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./ChimeraLlama-3-8B-v2.IQ3_M.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35,  # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<s>[INST] {prompt} [/INST]",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True,       # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./ChimeraLlama-3-8B-v2.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."},
    ]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model (mlabonne/ChimeraLlama-3-8B-v2) before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
nvidia/parakeet-tdt_ctc-1.1b | nvidia | 2024-05-08T03:33:30Z | 564 | 10 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"TDT",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:National-Singapore-Corpus-Part-1",
"dataset:vctk",
"dataset:voxpopuli",
"dataset:europarl",
"dataset:multilingual_librispeech",
"arxiv:2305.05084",
"arxiv:2304.06795",
"arxiv:2104.02821",
"license:cc-by-4.0",
"model-index",
"region:us"
]
| automatic-speech-recognition | 2024-05-07T11:42:30Z | ---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- mozilla-foundation/common_voice_8_0
- National-Singapore-Corpus-Part-1
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- TDT
- FastConformer
- Conformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: parakeet_tdt_1.1b
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: AMI (Meetings test)
      type: edinburghcstr/ami
      config: ihm
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 15.94
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Earnings-22
      type: revdotcom/earnings22
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 11.86
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: GigaSpeech
      type: speechcolab/gigaspeech
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 10.19
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.82
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.67
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: SPGI Speech
      type: kensho/spgispeech
      config: test
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 2.24
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: tedlium-v3
      type: LIUM/tedlium
      config: release1
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.87
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Vox Populi
      type: facebook/voxpopuli
      config: en
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 6.19
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 9.0
      type: mozilla-foundation/common_voice_9_0
      config: en
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.69
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
# Parakeet TDT-CTC 1.1B PnC(en)
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [](#datasets)
`parakeet-hyb-pnc-1.1b` is an ASR model that transcribes English speech with punctuation and capitalization. This model is jointly developed by the [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) and [Suno.ai](https://www.suno.ai/) teams.
It is an XXL version of the Hybrid FastConformer [1] TDT-CTC [2] model (around 1.1B parameters). Because it was trained with local attention and a global token, this model can transcribe **11 hours** of audio in a single pass. For reference, it can transcribe 90 minutes of audio in under 16 seconds on an A100.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt_ctc-1.1b")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
By default, the model uses the TDT decoder to transcribe audio files; to switch to the CTC decoder, pass `decoding_type='ctc'`.
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/parakeet-tdt_ctc-1.1b" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
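For example, a CTC-decoding invocation would look like this (a sketch; the `decoding_type` override follows the note above):
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/parakeet-tdt_ctc-1.1b" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
  decoding_type="ctc"
```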
### Input
This model accepts 16000 Hz mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
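If your audio is not already 16 kHz mono WAV, a conversion sketch with ffmpeg (assuming ffmpeg is installed; filenames are placeholders):
```shell
# Resample to 16 kHz, downmix to mono, and write WAV for this model
ffmpeg -i input.mp3 -ac 1 -ar 16000 input_16k.wav
```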
## Model Architecture
This model uses a Hybrid FastConformer-TDT-CTC architecture. FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).
## Training
The NeMo toolkit [3] was used to fine-tune this model for 20,000 steps on top of the `parakeet-tdt-1.1b` model. This model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/hybrid_transducer_ctc/fastconformer_hybrid_transducer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
The model was trained on 36K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
The training dataset consists of a private subset of 27K hours of English speech plus 9K hours from the following public PnC datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- National Speech Corpus Part 1
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
## Performance
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.
The following table summarizes the performance of the available models in this collection with the Transducer decoder. Performance is reported in terms of Word Error Rate (WER%) with greedy decoding.
|**Version**|**Tokenizer**|**Vocabulary Size**|**AMI**|**Earnings-22**|**Giga Speech**|**LS test-clean**|**LS test-other**|**SPGI Speech**|**TEDLIUM-v3**|**Vox Populi**|**Common Voice**|
|---------|-----------------------|-----------------|-------|---------------|---------------|-----------------|-----------------|---------------|--------------|--------------|--------------|
| 1.23.0 | SentencePiece Unigram | 1024 | 15.94 | 11.86 | 10.19 | 1.82 | 3.67 | 2.24 | 3.87 | 6.19 | 8.69 |
These are greedy WER numbers without external LM. More details on evaluation can be found at [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
## Model Fairness Evaluation
As outlined in the paper "Towards Measuring Fairness in AI: the Casual Conversations Dataset", we assessed the parakeet-tdt_ctc-1.1b model for fairness. The model was evaluated on the CasualConversations-v1 dataset, and the results are reported as follows:
### Gender Bias:
| Gender | Male | Female | N/A | Other |
| :--- | :--- | :--- | :--- | :--- |
| Num utterances | 19325 | 24532 | 926 | 33 |
| % WER | 12.81 | 10.49 | 13.88 | 23.12 |
### Age Bias:
| Age Group | (18-30) | (31-45) | (46-85) | (1-100) |
| :--- | :--- | :--- | :--- | :--- |
| Num utterances | 15956 | 14585 | 13349 | 43890 |
| % WER | 11.50 | 11.63 | 11.38 | 11.51 |
(Error rates for fairness evaluation are determined by normalizing both the reference and predicted text, similar to the methods used in the evaluations found at https://github.com/huggingface/open_asr_leaderboard.)
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
[2] [Efficient Sequence Transduction by Jointly Predicting Tokens and Durations](https://arxiv.org/abs/2304.06795)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[5] [Suno.ai](https://suno.ai/)
[6] [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)
[7] [Towards Measuring Fairness in AI: the Casual Conversations Dataset](https://arxiv.org/abs/2104.02821)
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
cognitivecomputations/dolphin-2.9.3-qwen2-1.5b | cognitivecomputations | 2024-06-14T01:17:55Z | 564 | 9 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"conversational",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-10T15:16:01Z | ---
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9.3 Qwen2 1.5B 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.3:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node
This model is based on Qwen2-1.5b, and is governed by the Apache-2.0 license.
The base model has 128k context, and the full-weight fine-tuning was with 16k sequence length.
Due to the complexities of fine-tuning smaller models on datasets created by/for larger models, we removed the coding, function calling, and systemchat-multilingual datasets when tuning these models.
Example prompt (ChatML format):
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
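A minimal `transformers` inference sketch using this template is below; it assumes the repository ships a ChatML chat template, and the prompt is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9.3-qwen2-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain beam search in one paragraph."},
]
# apply_chat_template renders the ChatML prompt shown above
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```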
Dolphin-2.9.3 has a variety of instruction and conversational skills.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service; it will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed under Apache-2.0. We grant permission for any use, including commercial, that is in accordance with said license. Dolphin was trained on data generated from GPT-4, among other models.
Evals:
<img src="https://i.ibb.co/JpNLRYG/file-9-Fs-Qpm4-LKPz-CZy08-Bu-MZj-Xi-V.png" width="600" /> |
Quant-Cartel/TeTO-MS-8x7b-iMat-GGUF | Quant-Cartel | 2024-06-13T16:18:58Z | 564 | 1 | null | [
"gguf",
"conversational",
"mixtral",
"merge",
"mergekit",
"arxiv:2403.19522",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2024-06-11T03:20:56Z | ---
license: cc-by-nc-4.0
tags:
- conversational
- mixtral
- merge
- mergekit
---
```
e88 88e d8
d888 888b 8888 8888 ,"Y88b 888 8e d88
C8888 8888D 8888 8888 "8" 888 888 88b d88888
Y888 888P Y888 888P ,ee 888 888 888 888
"88 88" "88 88" "88 888 888 888 888
b
8b,
e88'Y88 d8 888
d888 'Y ,"Y88b 888,8, d88 ,e e, 888
C8888 "8" 888 888 " d88888 d88 88b 888
Y888 ,d ,ee 888 888 888 888 , 888
"88,d88 "88 888 888 888 "YeeP" 888
PROUDLY PRESENTS
```
<img src="https://files.catbox.moe/zdxyzv.png" width="400"/>
## TeTO-MS-8x7b-iMat-GGUF
<i>Weighted quants were made using the full precision fp16 model and groups_merged_enhancedV3.</i>
<u><b>Te</b></u>soro + <u><b>T</b></u>yphon + <u><b>O</b></u>penGPT
Presenting a Model Stock experiment combining the unique strengths of the following 8x7b Mixtral models:
* Tess-2.0-Mixtral-8x7B-v0.2 / [migtissera](https://huggingface.co/migtissera) / General Purpose
* Typhon-Mixtral-v1 / [Sao10K](https://huggingface.co/Sao10K) / Creative & Story Completion
* Open_Gpt4_8x7B_v0.2 / [rombodawg](https://huggingface.co/rombodawg) / Conversational
# Recommended Template
* Basic: Alpaca Format (see the sketch after this list)
* Advanced: See context/instruct/sampler settings in [our new Recommended Settings repo](https://huggingface.co/Quant-Cartel/Recommended-Settings/tree/main/Teto-MS-8x7b).
* Huge shout out to [rAIfle](https://huggingface.co/rAIfle) for his original work on the Wizard 8x22b templates which were modified for this model.
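For reference, a standard Alpaca-style prompt layout is sketched below; exact stop strings and any system text are assumptions, so defer to the Recommended Settings repo above.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```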
<H2>Methodology</H2>
> [I]nnovative layer-wise weight averaging technique surpasses state-of-the-art model methods such as Model Soup, utilizing only two fine-tuned models. This strategy can be aptly coined Model Stock, highlighting its reliance on selecting a minimal number of models to draw a more optimized-averaged model
<i> (From [arXiv:2403.19522](https://arxiv.org/pdf/2403.19522))</i>
* Methodology and merging process was based on the following paper - [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522)
* Initial model selection was based on top-performing models of the Mixtral architecture, covering a variety of use cases and skills
* Base model (Mixtral Instruct 8x7b v0.1) was chosen after outperforming two other potential base models in terms of MMLU benchmark performance.
# Output
<img src="https://files.catbox.moe/bw97yg.PNG" width="400"/>
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using Mixtral-8x7B-v0.1-Instruct as a base.
### Models Merged
The following models were included in the merge:
* migtissera_Tess-2.0-Mixtral-8x7B-v0.2
* rombodawg_Open_Gpt4_8x7B_v0.2
* Sao10K_Typhon-Mixtral-v1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: models/migtissera_Tess-2.0-Mixtral-8x7B-v0.2
- model: models/Sao10K_Typhon-Mixtral-v1
- model: models/rombodawg_Open_Gpt4_8x7B_v0.2
merge_method: model_stock
base_model: models/Mixtral-8x7B-v0.1-Instruct
dtype: float16
```
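For reference, a config like this is typically executed with mergekit's CLI; a minimal sketch (output directory name assumed):
```shell
pip install mergekit
mergekit-yaml config.yaml ./TeTO-MS-8x7b --cuda
```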
## Appendix - Llama.cpp MMLU Benchmark Results*
<i>These results were calculated via perplexity.exe from llama.cpp using the following params:</i>
`.\perplexity -m .\models\TeTO-8x7b-MS-v0.03\TeTO-MS-8x7b-Q6_K.gguf -bf .\evaluations\mmlu-test.bin --multiple-choice -c 8192 -t 23 -ngl 200`
```
* V0.01 (4 model / Mixtral Base):
Final result: 43.3049 +/- 0.4196
Random chance: 25.0000 +/- 0.3667
* V0.02 (3 model / Tess Mixtral Base):
Final result: 43.8356 +/- 0.4202
Random chance: 25.0000 +/- 0.3667
* V0.03 (4 model / Mixtral Instruct Base):
Final result: 45.7004 +/- 0.4219
Random chance: 25.0000 +/- 0.3667
```
*Please be advised that the metrics above are not representative of final HF benchmark scores, for reasons given [here](https://github.com/ggerganov/llama.cpp/pull/5047) |
abmorton/standard-small-2 | abmorton | 2024-06-30T23:07:21Z | 564 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-30T23:03:17Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### standard-small-2 Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
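The checkpoint can also be loaded directly with diffusers; a minimal sketch (the trigger phrase is assumed to match the model name — substitute the instance prompt used during training if it differs):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/standard-small-2",
    torch_dtype=torch.float16,
).to("cuda")

# "standard-small-2" as the concept token is an assumption
image = pipe("a photo of standard-small-2").images[0]
image.save("sample.png")
```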
Sample pictures of this concept:
|
ai-forever/ruclip-vit-base-patch16-384 | ai-forever | 2022-01-11T02:29:57Z | 563 | 1 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | # ruclip-vit-base-patch16-384
**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model
for computing image–text similarities and re-ranking captions and pictures.
RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and
multimodal learning.
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text ranking`; `image ranking`; `zero-shot image classification`;
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `384`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `16`
## Usage [Github](https://github.com/sberbank-ai/ru-clip)
```
pip install ruclip
```
```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch16-384", device="cuda")
```
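For a quick zero-shot classification check, the `Predictor` helper from the ru-clip repository can be used; a minimal sketch following the repo README (class labels and image path are illustrative):
```python
import torch
import ruclip
from PIL import Image

device = "cuda"
clip, processor = ruclip.load("ruclip-vit-base-patch16-384", device=device)
predictor = ruclip.Predictor(clip, processor, device, bs=8)

# Labels are in Russian, matching the model's training language
classes = ["кошка", "собака", "машина"]
images = [Image.open("example.jpg")]

with torch.no_grad():
    text_latents = predictor.get_text_latents(classes)
    pred_labels = predictor.run(images, text_latents)

print(pred_labels)  # index of the best-matching class for each image
```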
## Performance
We have evaluated the performance on the following datasets:
| Dataset | Metric Name | Metric Result |
|:--------------|:---------------|:--------------------|
| Food101 | acc | 0.689 |
| CIFAR10 | acc | 0.845 |
| CIFAR100 | acc | 0.569 |
| Birdsnap | acc | 0.195 |
| SUN397 | acc | 0.521 |
| Stanford Cars | acc | 0.626 |
| DTD | acc | 0.421 |
| MNIST | acc | 0.478 |
| STL10 | acc | 0.964 |
| PCam | acc | 0.501 |
| CLEVR | acc | 0.132 |
| Rendered SST2 | acc | 0.525 |
| ImageNet | acc | 0.482 |
| FGVC Aircraft | mean-per-class | 0.046 |
| Oxford Pets | mean-per-class | 0.635 |
| Caltech101 | mean-per-class | 0.835 |
| Flowers102 | mean-per-class | 0.452 |
| HatefulMemes | roc-auc | 0.543 |
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
|
DucHaiten/DH_ClassicAnime | DucHaiten | 2023-03-02T17:04:56Z | 563 | 49 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-13T15:41:07Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
---
I don't know about you, but in my opinion this is the best anime model I've ever created. With a bit of romance, a little bit of classic style, and the indispensable NSFW, this is my favorite anime model. I even intended to sell it, but changed my mind in the end; it wouldn't be good if it couldn't be used by everyone.
After working with this model for a while, I have picked up a few tips for creating better images:
1. Always add the keyword **(80s anime style)** at the beginning of the prompt. A GTA style has also been added; its trigger keyword is **(gtav style)**. Note that only one of these keywords should be used per prompt: GTA without anime, anime without GTA.
2. Use this negative prompt: <pre>illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyebrows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error</pre>
3. Set the CFG Scale in the range 12.5 to 15.
Note that the sample images below were generated without a VAE.
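Putting these tips together, here is a minimal diffusers sketch (prompt is illustrative; the negative prompt is abbreviated from the full list above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DH_ClassicAnime",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "(80s anime style), 1girl, city street at night, rain, neon lights"
negative_prompt = (
    "illustration, painting, cartoons, sketch, (worst quality:2), "
    "(low quality:2), (normal quality:2), lowres, bad anatomy, bad hands"
)

# guidance_scale corresponds to the CFG Scale recommendation above (12.5-15)
image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=13.5).images[0]
image.save("sample.png")
```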













|
TheBloke/OrcaMaidXL-17B-32k-GGUF | TheBloke | 2023-12-21T13:34:28Z | 563 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"base_model:ddh0/OrcaMaidXL-17B-32k",
"license:other",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-21T11:48:13Z | ---
base_model: ddh0/OrcaMaidXL-17B-32k
inference: false
license: other
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
license_name: microsoft-research-license
model_creator: ddh0
model_name: OrcaMaidXL 17B 32K
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OrcaMaidXL 17B 32K - GGUF
- Model creator: [ddh0](https://huggingface.co/ddh0)
- Original model: [OrcaMaidXL 17B 32K](https://huggingface.co/ddh0/OrcaMaidXL-17B-32k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [ddh0's OrcaMaidXL 17B 32K](https://huggingface.co/ddh0/OrcaMaidXL-17B-32k).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF)
* [ddh0's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ddh0/OrcaMaidXL-17B-32k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [ddh0's OrcaMaidXL 17B 32K](https://huggingface.co/ddh0/OrcaMaidXL-17B-32k).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [orcamaidxl-17b-32k.Q2_K.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q2_K.gguf) | Q2_K | 2 | 7.26 GB| 9.76 GB | smallest, significant quality loss - not recommended for most purposes |
| [orcamaidxl-17b-32k.Q3_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q3_K_S.gguf) | Q3_K_S | 3 | 7.57 GB| 10.07 GB | very small, high quality loss |
| [orcamaidxl-17b-32k.Q3_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q3_K_M.gguf) | Q3_K_M | 3 | 8.48 GB| 10.98 GB | very small, high quality loss |
| [orcamaidxl-17b-32k.Q3_K_L.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q3_K_L.gguf) | Q3_K_L | 3 | 9.28 GB| 11.78 GB | small, substantial quality loss |
| [orcamaidxl-17b-32k.Q4_0.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q4_0.gguf) | Q4_0 | 4 | 9.87 GB| 12.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [orcamaidxl-17b-32k.Q4_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q4_K_S.gguf) | Q4_K_S | 4 | 9.91 GB| 12.41 GB | small, greater quality loss |
| [orcamaidxl-17b-32k.Q4_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q4_K_M.gguf) | Q4_K_M | 4 | 10.52 GB| 13.02 GB | medium, balanced quality - recommended |
| [orcamaidxl-17b-32k.Q5_0.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q5_0.gguf) | Q5_0 | 5 | 12.03 GB| 14.53 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [orcamaidxl-17b-32k.Q5_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q5_K_S.gguf) | Q5_K_S | 5 | 12.03 GB| 14.53 GB | large, low quality loss - recommended |
| [orcamaidxl-17b-32k.Q5_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q5_K_M.gguf) | Q5_K_M | 5 | 12.36 GB| 14.86 GB | large, very low quality loss - recommended |
| [orcamaidxl-17b-32k.Q6_K.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q6_K.gguf) | Q6_K | 6 | 14.32 GB| 16.82 GB | very large, extremely low quality loss |
| [orcamaidxl-17b-32k.Q8_0.gguf](https://huggingface.co/TheBloke/OrcaMaidXL-17B-32k-GGUF/blob/main/orcamaidxl-17b-32k.Q8_0.gguf) | Q8_0 | 8 | 18.55 GB| 21.05 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/OrcaMaidXL-17B-32k-GGUF and below it, a specific filename to download, such as: orcamaidxl-17b-32k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/OrcaMaidXL-17B-32k-GGUF orcamaidxl-17b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/OrcaMaidXL-17B-32k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OrcaMaidXL-17B-32k-GGUF orcamaidxl-17b-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m orcamaidxl-17b-32k.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./orcamaidxl-17b-32k.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./orcamaidxl-17b-32k.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: ddh0's OrcaMaidXL 17B 32K
# OrcaMaidXL-17B-32k
This is a slightly experimental frankenmerge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.2-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2).
The model recipe is as follows:
```
- [0, 12] from Noromaid (12 layers)
- [12, 22] from Orca (10 layers)
- [14, 24] from Orca (10 layers)
- [18, 28] from Orca (10 layers)
- [28, 40] from Noromaid (12 layers)
```
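For reference, a recipe like this corresponds to a mergekit `passthrough` configuration along these lines (a sketch with assumed model paths, not the author's exact file):
```yaml
slices:
  - sources:
      - model: NeverSleep/Noromaid-13b-v0.2
        layer_range: [0, 12]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [12, 22]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [14, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [18, 28]
  - sources:
      - model: NeverSleep/Noromaid-13b-v0.2
        layer_range: [28, 40]
merge_method: passthrough
dtype: float16
```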
In my testing so far, the model performs exceptionally well. Your experience may vary.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- Model size is 17.46B
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid
<!-- original-model-card end -->
|
mradermacher/llava-v1.6-mistral-7b-GGUF | mradermacher | 2024-05-06T05:42:08Z | 563 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TitanML/llava-v1.6-mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-29T04:30:01Z | ---
base_model: TitanML/llava-v1.6-mistral-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/TitanML/llava-v1.6-mistral-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
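For example, a single quant can be fetched with `huggingface-cli` (filename taken from the table below):
```shell
pip install huggingface-hub
huggingface-cli download mradermacher/llava-v1.6-mistral-7b-GGUF \
  llava-v1.6-mistral-7b.Q4_K_M.gguf --local-dir .
```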
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llava-v1.6-mistral-7b-GGUF/resolve/main/llava-v1.6-mistral-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF | mradermacher | 2024-05-06T05:09:35Z | 563 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Aeala/GPT4-x-AlpacaDente2-30b",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-07T21:25:07Z | ---
base_model: Aeala/GPT4-x-AlpacaDente2-30b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Aeala/GPT4-x-AlpacaDente2-30b
**This uses only 40k tokens of my standard set, as the model overflowed with more.**
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/GPT4-x-AlpacaDente2-30b-i1-GGUF/resolve/main/GPT4-x-AlpacaDente2-30b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lmstudio-community/codegemma-7b-it-GGUF | lmstudio-community | 2024-04-09T15:17:37Z | 563 | 10 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-09T14:56:41Z | ---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access CodeGemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged-in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
- text: >
<start_of_turn>user
Write a Python function to calculate the nth fibonacci number.<end_of_turn>
<start_of_turn>model
inference:
parameters:
max_new_tokens: 200
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
lm_studio:
param_count: 8b
use_case: coding
release_date: 09-04-2024
model_creator: google
prompt_template: Google Gemma Instruct
system_prompt: none
base_model: gemma
original_repo: google/codegemma-7b-it
---
## 💫 Community Model> CodeGemma 7b Instruct by Google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Google](https://huggingface.co/google)<br>
**Original model**: [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2589](https://github.com/ggerganov/llama.cpp/releases/tag/b2589)<br>
## Model Summary:
CodeGemma 7b Instruct is the first in a series of coding models released by Google. As an instruct model, it specializes in answering coding-related questions, but it can also function as an autocomplete/fill-in-the-middle model for tools like Copilot.<br>
This model is perfect for general coding questions or code generation.
## Prompt Template:
Choose the 'Google Gemma Instruct' preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```
Note that this model does not support a System prompt.
## Use case and examples
This model should be used for coding purposes.
### Coding 1
```
<start_of_turn>user
Write a quicksort in C++<end_of_turn>
<start_of_turn>model
#include <iostream>

using namespace std;

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] <= pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

int main() {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    return 0;
}
<end_of_turn>
```
### Coding 2
```
<start_of_turn>user
Write a merge sort in java<end_of_turn>
<start_of_turn>model
public class MergeSort {
    public static void mergeSort(int[] arr) {
        int n = arr.length;
        if (n < 2) {
            return;
        }
        int mid = n / 2;
        int[] left = new int[mid];
        int[] right = new int[n - mid];
        for (int i = 0; i < mid; i++) {
            left[i] = arr[i];
        }
        for (int i = mid; i < n; i++) {
            right[i - mid] = arr[i];
        }
        mergeSort(left);
        mergeSort(right);
        merge(arr, left, right);
    }

    public static void merge(int[] arr, int[] left, int[] right) {
        int i = 0;
        int j = 0;
        int k = 0;
        while (i < left.length && j < right.length) {
            if (left[i] <= right[j]) {
                arr[k] = left[i];
                i++;
            } else {
                arr[k] = right[j];
                j++;
            }
            k++;
        }
        while (i < left.length) {
            arr[k] = left[i];
            i++;
            k++;
        }
        while (j < right.length) {
            arr[k] = right[j];
            j++;
            k++;
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 2, 4, 6, 1, 3};
        mergeSort(arr);
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
    }
}
<end_of_turn>
```
### Coding 3
```
<bos><start_of_turn>user
How do you write a for loop in react?<end_of_turn>
<start_of_turn>model
The syntax for a for loop in React is:
for (let i = 0; i < array.length; i++) {
  // Code to be executed for each iteration
}
Where:
* `i` is the loop variable, which starts at 0 and increments by 1 with each iteration.
* `array` is the array you want to iterate over.
* `array.length` is the length of the array.
* `// Code to be executed for each iteration` is the code you want to execute for each iteration of the loop.
Here is an example of how to use a for loop to render a list of items:
const items = ["item 1", "item 2", "item 3"];

const listItems = items.map((item, index) => (
  <li key={index}>{item}</li>
));

return (
  <ul>
    {listItems}
  </ul>
);
In this example, the `map()` method is used to create a new array of `li` elements. The `key` prop is used to uniquely identify each `li` element.
Here are some other things to keep in mind when using for loops in React:
* You can use any variable name for the loop variable.
* You can use any condition for the loop condition.
* You can use any code inside the loop.
* You can use nested for loops.
Here are some resources that you may find helpful:
* **For Loop in JavaScript:** https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for
* **Map() method:** https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
* **Key prop:** https://reactjs.org/docs/lists-and-keys.html#keys<end_of_turn>
```
## Coding related question
```
<bos><start_of_turn>user
When would you use an array versus a dictionary?<end_of_turn>
<start_of_turn>model
**Arrays** and **dictionaries** are both data structures used in computer programming, but they have different purposes and characteristics.
**Arrays** are ordered collections of elements of the same data type. They are indexed by their position in the array, starting from 0. Arrays are ideal for storing and accessing data in a sequential manner.
**Dictionaries**, on the other hand, are unordered collections of key-value pairs. Keys are unique identifiers that are used to access the corresponding values. Dictionaries are ideal for storing and accessing data based on a specific key.
**When to use an array:**
* When you need to store a fixed number of elements of the same data type.
* When you need to access elements by their position in the array.
* When you need to iterate over the elements in a specific order.
**When to use a dictionary:**
* When you need to store a variable number of elements of different data types.
* When you need to access elements by their key.
* When you need to quickly check if a key exists in the dictionary.
**Example:**
**Array:**
# Create an array of integers
numbers = [1, 2, 3, 4, 5]
# Access the first element
print(numbers[0]) # Output: 1
# Iterate over the elements
for number in numbers:
    print(number)
**Dictionary:**
# Create a dictionary of names and ages
names_ages = {"John": 30, "Jane": 25, "Peter": 40}
# Access the age of John
print(names_ages["John"]) # Output: 30
# Check if a key exists
if "Mary" in names_ages:
print("Mary is in the dictionary")
else:
print("Mary is not in the dictionary")
**In summary:**
* Use an array when you need to store a fixed number of elements of the same data type and access them by their position.
* Use a dictionary when you need to store a variable number of elements of different data types and access them by their key.<end_of_turn>
```
## Technical Details
CodeGemma is based on the Gemma 7b model with additional training on web documents, mathematics, and code, with a mixture of 80% code and 20% natural language.
The code used is based on publicly available code repositories.
The instruct version was further trained on mathematical datasets in an attempt to improve its mathematical reasoning capabilities, as well as synthetic code generation combined with a second LLM for evaluation and reinforcement feedback.
Additional details can be found on Google's official report PDF [here](https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for these quants, which improves the overall quality!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
BrachioLab/supernova-classification | BrachioLab | 2024-04-16T19:40:00Z | 563 | 0 | transformers | [
"transformers",
"pytorch",
"informer",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-04-16T19:34:16Z | ---
license: mit
---
## Usage
```python
import torch
# informer_models is assumed to be provided alongside the model (not on PyPI)
from informer_models import InformerConfig, InformerForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

model = InformerForSequenceClassification.from_pretrained("BrachioLab/supernova-classification")
model.to(device)
model.eval()

y_true = []
y_pred = []
# `test_dataloader` is assumed to be a user-provided DataLoader yielding
# dicts of tensors (plus an "objid" key that is dropped before the forward pass)
for i, batch in enumerate(test_dataloader):
    print(f"processing batch {i}")
    batch = {k: v.to(device) for k, v in batch.items() if k != "objid"}
    with torch.no_grad():
        outputs = model(**batch)
    y_true.extend(batch["labels"].cpu().numpy())
    y_pred.extend(torch.argmax(outputs.logits, dim=2).squeeze().cpu().numpy())

print(f"accuracy: {sum([1 for i, j in zip(y_true, y_pred) if i == j]) / len(y_true)}")
``` |
LLyq/ms_detr_finetuned_diana | LLyq | 2024-06-03T14:41:10Z | 563 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2024-05-30T08:24:45Z | ---
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: ms_detr_finetuned_diana
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ms_detr_finetuned_diana
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Map: 0.6874
- Map 50: 0.789
- Map 75: 0.7871
- Map Small: -1.0
- Map Medium: 0.7147
- Map Large: 0.6892
- Mar 1: 0.0969
- Mar 10: 0.7163
- Mar 100: 0.7819
- Mar Small: -1.0
- Mar Medium: 0.75
- Mar Large: 0.7947
- Map Per Class: -1.0
- Mar 100 Per Class: -1.0
- Classes: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
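In the absence of card details, a standard transformers object-detection sketch should work for inference, since this is a fine-tuned Conditional DETR checkpoint (image path and threshold are illustrative):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("LLyq/ms_detr_finetuned_diana")
model = AutoModelForObjectDetection.from_pretrained("LLyq/ms_detr_finetuned_diana")

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to boxes/labels, keeping detections above 0.5 confidence
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```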
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Per Class | Mar 100 Per Class | Classes |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------------:|:-----------------:|:-------:|
| 2.892 | 1.0 | 10 | 2.2713 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.717 | 2.0 | 20 | 1.6999 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.5162 | 3.0 | 30 | 1.4320 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.3622 | 4.0 | 40 | 1.2202 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.1926 | 5.0 | 50 | 1.1617 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.2362 | 6.0 | 60 | 1.1772 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.3114 | 7.0 | 70 | 1.0437 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 1.1188 | 8.0 | 80 | 0.9656 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.9169 | 9.0 | 90 | 0.8787 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.7998 | 10.0 | 100 | 0.7928 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.7385 | 11.0 | 110 | 0.6800 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.6697 | 12.0 | 120 | 0.6025 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.4984 | 13.0 | 130 | 0.5722 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0 |
| 0.5245 | 14.0 | 140 | 0.5460 | 0.0079 | 0.0099 | 0.0099 | -1.0 | 0.0238 | 0.0 | 0.005 | 0.005 | 0.005 | -1.0 | 0.0174 | 0.0 | -1.0 | -1.0 | 0 |
| 0.3993 | 15.0 | 150 | 0.5030 | 0.0168 | 0.0198 | 0.0198 | -1.0 | 0.0267 | 0.0079 | 0.0106 | 0.0106 | 0.0106 | -1.0 | 0.0196 | 0.007 | -1.0 | -1.0 | 0 |
| 0.4789 | 16.0 | 160 | 0.4662 | 0.2283 | 0.2665 | 0.2665 | -1.0 | 0.1539 | 0.2598 | 0.0794 | 0.2438 | 0.2438 | -1.0 | 0.1543 | 0.2798 | -1.0 | -1.0 | 0 |
| 0.5485 | 17.0 | 170 | 0.4586 | 0.2041 | 0.2369 | 0.2369 | -1.0 | 0.1679 | 0.2202 | 0.0731 | 0.2175 | 0.2175 | -1.0 | 0.1717 | 0.236 | -1.0 | -1.0 | 0 |
| 0.3419 | 18.0 | 180 | 0.4637 | 0.3674 | 0.4292 | 0.4292 | -1.0 | 0.2666 | 0.4049 | 0.0856 | 0.3994 | 0.3994 | -1.0 | 0.2696 | 0.4518 | -1.0 | -1.0 | 0 |
| 0.4885 | 19.0 | 190 | 0.5509 | 0.4468 | 0.5407 | 0.5314 | -1.0 | 0.354 | 0.4898 | 0.0925 | 0.5156 | 0.5156 | -1.0 | 0.3761 | 0.5719 | -1.0 | -1.0 | 0 |
| 0.3336 | 20.0 | 200 | 0.5122 | 0.1809 | 0.2149 | 0.2149 | -1.0 | 0.0802 | 0.2257 | 0.0763 | 0.195 | 0.195 | -1.0 | 0.0783 | 0.2421 | -1.0 | -1.0 | 0 |
| 0.3471 | 21.0 | 210 | 0.4619 | 0.4291 | 0.519 | 0.519 | -1.0 | 0.3115 | 0.4787 | 0.0906 | 0.465 | 0.4706 | -1.0 | 0.3174 | 0.5325 | -1.0 | -1.0 | 0 |
| 0.3953 | 22.0 | 220 | 0.4313 | 0.4938 | 0.5927 | 0.581 | -1.0 | 0.3728 | 0.5476 | 0.0887 | 0.5319 | 0.5437 | -1.0 | 0.3891 | 0.6061 | -1.0 | -1.0 | 0 |
| 0.5373 | 23.0 | 230 | 0.4308 | 0.5264 | 0.6339 | 0.623 | -1.0 | 0.4223 | 0.5733 | 0.0938 | 0.5744 | 0.5894 | -1.0 | 0.4391 | 0.65 | -1.0 | -1.0 | 0 |
| 0.3092 | 24.0 | 240 | 0.4193 | 0.5561 | 0.6661 | 0.6571 | -1.0 | 0.4775 | 0.5927 | 0.0944 | 0.5844 | 0.6169 | -1.0 | 0.5022 | 0.6632 | -1.0 | -1.0 | 0 |
| 0.477 | 25.0 | 250 | 0.4125 | 0.41 | 0.4769 | 0.4769 | -1.0 | 0.3283 | 0.4467 | 0.1006 | 0.4387 | 0.4487 | -1.0 | 0.3348 | 0.4947 | -1.0 | -1.0 | 0 |
| 0.3867 | 26.0 | 260 | 0.4114 | 0.6146 | 0.743 | 0.7168 | -1.0 | 0.6036 | 0.6294 | 0.0938 | 0.6381 | 0.6981 | -1.0 | 0.6304 | 0.7254 | -1.0 | -1.0 | 0 |
| 0.3658 | 27.0 | 270 | 0.4001 | 0.6643 | 0.7856 | 0.7727 | -1.0 | 0.6505 | 0.6817 | 0.1 | 0.6787 | 0.7456 | -1.0 | 0.6935 | 0.7667 | -1.0 | -1.0 | 0 |
| 0.3053 | 28.0 | 280 | 0.4282 | 0.5218 | 0.6326 | 0.6201 | -1.0 | 0.4163 | 0.5676 | 0.095 | 0.5631 | 0.5844 | -1.0 | 0.4261 | 0.6482 | -1.0 | -1.0 | 0 |
| 0.3105 | 29.0 | 290 | 0.4398 | 0.6187 | 0.7567 | 0.7331 | -1.0 | 0.6045 | 0.6336 | 0.0919 | 0.6413 | 0.6944 | -1.0 | 0.6478 | 0.7132 | -1.0 | -1.0 | 0 |
| 0.3713 | 30.0 | 300 | 0.4202 | 0.5643 | 0.6683 | 0.6673 | -1.0 | 0.4959 | 0.5996 | 0.0931 | 0.5969 | 0.6306 | -1.0 | 0.5348 | 0.6693 | -1.0 | -1.0 | 0 |
| 0.2874 | 31.0 | 310 | 0.4018 | 0.6327 | 0.7547 | 0.7357 | -1.0 | 0.5647 | 0.6713 | 0.0981 | 0.6575 | 0.715 | -1.0 | 0.5957 | 0.7632 | -1.0 | -1.0 | 0 |
| 0.2809 | 32.0 | 320 | 0.4007 | 0.6444 | 0.7618 | 0.7412 | -1.0 | 0.6189 | 0.667 | 0.1013 | 0.6694 | 0.7381 | -1.0 | 0.6587 | 0.7702 | -1.0 | -1.0 | 0 |
| 0.2801 | 33.0 | 330 | 0.4528 | 0.6125 | 0.7386 | 0.727 | -1.0 | 0.5399 | 0.6519 | 0.0938 | 0.6338 | 0.6994 | -1.0 | 0.5739 | 0.75 | -1.0 | -1.0 | 0 |
| 0.3484 | 34.0 | 340 | 0.4233 | 0.6226 | 0.7505 | 0.7381 | -1.0 | 0.6268 | 0.642 | 0.0988 | 0.64 | 0.7194 | -1.0 | 0.6543 | 0.7456 | -1.0 | -1.0 | 0 |
| 0.3591 | 35.0 | 350 | 0.4156 | 0.6475 | 0.759 | 0.7576 | -1.0 | 0.6733 | 0.6536 | 0.095 | 0.6712 | 0.74 | -1.0 | 0.7065 | 0.7535 | -1.0 | -1.0 | 0 |
| 0.2204 | 36.0 | 360 | 0.4363 | 0.5723 | 0.6577 | 0.6577 | -1.0 | 0.548 | 0.5962 | 0.0956 | 0.6112 | 0.6456 | -1.0 | 0.5717 | 0.6754 | -1.0 | -1.0 | 0 |
| 0.3912 | 37.0 | 370 | 0.4261 | 0.6855 | 0.8128 | 0.8032 | -1.0 | 0.6777 | 0.7012 | 0.1006 | 0.7063 | 0.7806 | -1.0 | 0.7217 | 0.8044 | -1.0 | -1.0 | 0 |
| 0.3377 | 38.0 | 380 | 0.4260 | 0.6743 | 0.7969 | 0.7847 | -1.0 | 0.6147 | 0.7086 | 0.1025 | 0.69 | 0.7444 | -1.0 | 0.6413 | 0.786 | -1.0 | -1.0 | 0 |
| 0.3047 | 39.0 | 390 | 0.4211 | 0.6519 | 0.7793 | 0.7647 | -1.0 | 0.6241 | 0.6742 | 0.1006 | 0.6681 | 0.7175 | -1.0 | 0.6457 | 0.7465 | -1.0 | -1.0 | 0 |
| 0.2563 | 40.0 | 400 | 0.4313 | 0.6441 | 0.769 | 0.769 | -1.0 | 0.724 | 0.6211 | 0.1006 | 0.6669 | 0.7275 | -1.0 | 0.7543 | 0.7167 | -1.0 | -1.0 | 0 |
| 0.3127 | 41.0 | 410 | 0.4297 | 0.5739 | 0.6679 | 0.6593 | -1.0 | 0.5456 | 0.5969 | 0.0975 | 0.5987 | 0.6438 | -1.0 | 0.563 | 0.6763 | -1.0 | -1.0 | 0 |
| 0.2782 | 42.0 | 420 | 0.4133 | 0.6234 | 0.7267 | 0.7179 | -1.0 | 0.6279 | 0.6346 | 0.0962 | 0.6394 | 0.7019 | -1.0 | 0.6457 | 0.7246 | -1.0 | -1.0 | 0 |
| 0.2965 | 43.0 | 430 | 0.4304 | 0.5708 | 0.684 | 0.664 | -1.0 | 0.5152 | 0.6021 | 0.0938 | 0.6225 | 0.6538 | -1.0 | 0.5326 | 0.7026 | -1.0 | -1.0 | 0 |
| 0.2599 | 44.0 | 440 | 0.4240 | 0.6451 | 0.7575 | 0.7445 | -1.0 | 0.67 | 0.6528 | 0.0975 | 0.6762 | 0.7506 | -1.0 | 0.6978 | 0.7719 | -1.0 | -1.0 | 0 |
| 0.2821 | 45.0 | 450 | 0.4361 | 0.695 | 0.8218 | 0.796 | -1.0 | 0.7193 | 0.6945 | 0.0969 | 0.7287 | 0.79 | -1.0 | 0.7587 | 0.8026 | -1.0 | -1.0 | 0 |
| 0.3117 | 46.0 | 460 | 0.4164 | 0.7032 | 0.8334 | 0.8051 | -1.0 | 0.7114 | 0.7094 | 0.1019 | 0.7331 | 0.7925 | -1.0 | 0.7391 | 0.814 | -1.0 | -1.0 | 0 |
| 0.2484 | 47.0 | 470 | 0.4007 | 0.6757 | 0.7805 | 0.7805 | -1.0 | 0.7268 | 0.6714 | 0.1037 | 0.7088 | 0.7656 | -1.0 | 0.7543 | 0.7702 | -1.0 | -1.0 | 0 |
| 0.3059 | 48.0 | 480 | 0.4269 | 0.651 | 0.7539 | 0.7496 | -1.0 | 0.599 | 0.6866 | 0.0956 | 0.6819 | 0.7294 | -1.0 | 0.6174 | 0.7746 | -1.0 | -1.0 | 0 |
| 0.2492 | 49.0 | 490 | 0.3877 | 0.6452 | 0.7506 | 0.7277 | -1.0 | 0.6343 | 0.6624 | 0.09 | 0.6794 | 0.7244 | -1.0 | 0.6565 | 0.7518 | -1.0 | -1.0 | 0 |
| 0.3828 | 50.0 | 500 | 0.4237 | 0.6721 | 0.7953 | 0.7744 | -1.0 | 0.7174 | 0.6672 | 0.0887 | 0.7056 | 0.7606 | -1.0 | 0.75 | 0.7649 | -1.0 | -1.0 | 0 |
| 0.2737 | 51.0 | 510 | 0.3713 | 0.6619 | 0.7594 | 0.7453 | -1.0 | 0.6988 | 0.6615 | 0.0925 | 0.6919 | 0.7419 | -1.0 | 0.7283 | 0.7474 | -1.0 | -1.0 | 0 |
| 0.3283 | 52.0 | 520 | 0.3737 | 0.6298 | 0.7286 | 0.7254 | -1.0 | 0.6199 | 0.6431 | 0.0894 | 0.6575 | 0.7019 | -1.0 | 0.6478 | 0.7237 | -1.0 | -1.0 | 0 |
| 0.2819 | 53.0 | 530 | 0.4077 | 0.6919 | 0.8094 | 0.8004 | -1.0 | 0.7656 | 0.6797 | 0.0894 | 0.7106 | 0.7831 | -1.0 | 0.7978 | 0.7772 | -1.0 | -1.0 | 0 |
| 0.2533 | 54.0 | 540 | 0.4056 | 0.7032 | 0.8249 | 0.8045 | -1.0 | 0.724 | 0.7105 | 0.0962 | 0.7381 | 0.7962 | -1.0 | 0.7522 | 0.814 | -1.0 | -1.0 | 0 |
| 0.3408 | 55.0 | 550 | 0.3916 | 0.667 | 0.771 | 0.7591 | -1.0 | 0.6257 | 0.6997 | 0.0969 | 0.6913 | 0.7356 | -1.0 | 0.65 | 0.7702 | -1.0 | -1.0 | 0 |
| 0.2069 | 56.0 | 560 | 0.3931 | 0.7054 | 0.8197 | 0.7998 | -1.0 | 0.7399 | 0.7061 | 0.0962 | 0.7188 | 0.785 | -1.0 | 0.7652 | 0.793 | -1.0 | -1.0 | 0 |
| 0.2572 | 57.0 | 570 | 0.4012 | 0.6993 | 0.8117 | 0.7932 | -1.0 | 0.729 | 0.699 | 0.1025 | 0.7275 | 0.7862 | -1.0 | 0.7609 | 0.7965 | -1.0 | -1.0 | 0 |
| 0.1786 | 58.0 | 580 | 0.3830 | 0.7114 | 0.8231 | 0.8047 | -1.0 | 0.7666 | 0.7065 | 0.0975 | 0.7325 | 0.8056 | -1.0 | 0.7935 | 0.8105 | -1.0 | -1.0 | 0 |
| 0.2185 | 59.0 | 590 | 0.3609 | 0.705 | 0.8153 | 0.806 | -1.0 | 0.7388 | 0.6997 | 0.0919 | 0.7194 | 0.7912 | -1.0 | 0.7783 | 0.7965 | -1.0 | -1.0 | 0 |
| 0.2219 | 60.0 | 600 | 0.3783 | 0.7086 | 0.8241 | 0.8092 | -1.0 | 0.7106 | 0.7192 | 0.0969 | 0.74 | 0.805 | -1.0 | 0.7391 | 0.8316 | -1.0 | -1.0 | 0 |
| 0.3003 | 61.0 | 610 | 0.4098 | 0.7118 | 0.8178 | 0.8085 | -1.0 | 0.7534 | 0.7084 | 0.0919 | 0.7394 | 0.8138 | -1.0 | 0.7891 | 0.8237 | -1.0 | -1.0 | 0 |
| 0.3144 | 62.0 | 620 | 0.4339 | 0.6867 | 0.7967 | 0.7794 | -1.0 | 0.7074 | 0.6913 | 0.0894 | 0.7294 | 0.7875 | -1.0 | 0.7435 | 0.8053 | -1.0 | -1.0 | 0 |
| 0.2323 | 63.0 | 630 | 0.4086 | 0.6733 | 0.7901 | 0.7787 | -1.0 | 0.692 | 0.6792 | 0.0944 | 0.715 | 0.7719 | -1.0 | 0.7239 | 0.7912 | -1.0 | -1.0 | 0 |
| 0.3114 | 64.0 | 640 | 0.3946 | 0.6801 | 0.7905 | 0.7658 | -1.0 | 0.6803 | 0.6931 | 0.0981 | 0.7188 | 0.7713 | -1.0 | 0.7043 | 0.7982 | -1.0 | -1.0 | 0 |
| 0.2579 | 65.0 | 650 | 0.3899 | 0.6658 | 0.7882 | 0.7637 | -1.0 | 0.6463 | 0.6861 | 0.0988 | 0.7013 | 0.7613 | -1.0 | 0.6739 | 0.7965 | -1.0 | -1.0 | 0 |
| 0.2312 | 66.0 | 660 | 0.3816 | 0.6567 | 0.7604 | 0.7548 | -1.0 | 0.6226 | 0.6793 | 0.0969 | 0.6919 | 0.7506 | -1.0 | 0.6478 | 0.7921 | -1.0 | -1.0 | 0 |
| 0.209 | 67.0 | 670 | 0.3967 | 0.699 | 0.8281 | 0.7953 | -1.0 | 0.7061 | 0.7074 | 0.0906 | 0.7394 | 0.7994 | -1.0 | 0.7348 | 0.8254 | -1.0 | -1.0 | 0 |
| 0.3113 | 68.0 | 680 | 0.4126 | 0.6991 | 0.8308 | 0.7838 | -1.0 | 0.7222 | 0.6999 | 0.0962 | 0.7337 | 0.7994 | -1.0 | 0.7522 | 0.8184 | -1.0 | -1.0 | 0 |
| 0.2045 | 69.0 | 690 | 0.4276 | 0.703 | 0.8227 | 0.7956 | -1.0 | 0.7344 | 0.7016 | 0.0838 | 0.735 | 0.8006 | -1.0 | 0.7652 | 0.8149 | -1.0 | -1.0 | 0 |
| 0.2002 | 70.0 | 700 | 0.4094 | 0.6843 | 0.7886 | 0.7836 | -1.0 | 0.6975 | 0.6908 | 0.0919 | 0.7175 | 0.7788 | -1.0 | 0.7239 | 0.8009 | -1.0 | -1.0 | 0 |
| 0.2065 | 71.0 | 710 | 0.4052 | 0.7065 | 0.8196 | 0.8056 | -1.0 | 0.7149 | 0.7146 | 0.0981 | 0.7337 | 0.8 | -1.0 | 0.7457 | 0.8219 | -1.0 | -1.0 | 0 |
| 0.2716 | 72.0 | 720 | 0.4000 | 0.7039 | 0.8294 | 0.8059 | -1.0 | 0.7229 | 0.7079 | 0.0956 | 0.7344 | 0.7987 | -1.0 | 0.7522 | 0.8175 | -1.0 | -1.0 | 0 |
| 0.2935 | 73.0 | 730 | 0.3905 | 0.652 | 0.7532 | 0.7475 | -1.0 | 0.6084 | 0.6786 | 0.0906 | 0.7038 | 0.7525 | -1.0 | 0.6326 | 0.8009 | -1.0 | -1.0 | 0 |
| 0.2137 | 74.0 | 740 | 0.3959 | 0.627 | 0.7307 | 0.725 | -1.0 | 0.6212 | 0.6476 | 0.0906 | 0.6794 | 0.7312 | -1.0 | 0.6522 | 0.7632 | -1.0 | -1.0 | 0 |
| 0.2075 | 75.0 | 750 | 0.3786 | 0.6542 | 0.769 | 0.7535 | -1.0 | 0.659 | 0.6667 | 0.0906 | 0.71 | 0.7619 | -1.0 | 0.6913 | 0.7904 | -1.0 | -1.0 | 0 |
| 0.1713 | 76.0 | 760 | 0.3836 | 0.6695 | 0.7851 | 0.7703 | -1.0 | 0.6683 | 0.6859 | 0.0906 | 0.7138 | 0.7781 | -1.0 | 0.7065 | 0.807 | -1.0 | -1.0 | 0 |
| 0.2233 | 77.0 | 770 | 0.3947 | 0.6659 | 0.775 | 0.7694 | -1.0 | 0.714 | 0.6616 | 0.0913 | 0.705 | 0.7763 | -1.0 | 0.7543 | 0.7851 | -1.0 | -1.0 | 0 |
| 0.2398 | 78.0 | 780 | 0.3835 | 0.6854 | 0.7997 | 0.7883 | -1.0 | 0.7067 | 0.6903 | 0.0906 | 0.7212 | 0.785 | -1.0 | 0.7391 | 0.8035 | -1.0 | -1.0 | 0 |
| 0.1906 | 79.0 | 790 | 0.3811 | 0.6901 | 0.8028 | 0.7974 | -1.0 | 0.7128 | 0.6948 | 0.0956 | 0.7231 | 0.7931 | -1.0 | 0.7478 | 0.8114 | -1.0 | -1.0 | 0 |
| 0.1823 | 80.0 | 800 | 0.3831 | 0.6721 | 0.7821 | 0.7709 | -1.0 | 0.7005 | 0.6764 | 0.0906 | 0.7094 | 0.775 | -1.0 | 0.737 | 0.7904 | -1.0 | -1.0 | 0 |
| 0.2266 | 81.0 | 810 | 0.3831 | 0.6973 | 0.8131 | 0.8007 | -1.0 | 0.6966 | 0.7103 | 0.0969 | 0.7319 | 0.7969 | -1.0 | 0.7326 | 0.8228 | -1.0 | -1.0 | 0 |
| 0.1812 | 82.0 | 820 | 0.3809 | 0.6934 | 0.8057 | 0.7933 | -1.0 | 0.6984 | 0.707 | 0.0962 | 0.7262 | 0.7919 | -1.0 | 0.7348 | 0.8149 | -1.0 | -1.0 | 0 |
| 0.1811 | 83.0 | 830 | 0.3820 | 0.6836 | 0.7953 | 0.7829 | -1.0 | 0.6797 | 0.7021 | 0.0969 | 0.7156 | 0.7819 | -1.0 | 0.7087 | 0.8114 | -1.0 | -1.0 | 0 |
| 0.1677 | 84.0 | 840 | 0.3851 | 0.6809 | 0.7891 | 0.7759 | -1.0 | 0.6854 | 0.6942 | 0.0962 | 0.7163 | 0.7763 | -1.0 | 0.7109 | 0.8026 | -1.0 | -1.0 | 0 |
| 0.159 | 85.0 | 850 | 0.3791 | 0.6802 | 0.7889 | 0.7765 | -1.0 | 0.6799 | 0.6954 | 0.0962 | 0.715 | 0.775 | -1.0 | 0.7087 | 0.8018 | -1.0 | -1.0 | 0 |
| 0.1646 | 86.0 | 860 | 0.3712 | 0.6856 | 0.7903 | 0.7785 | -1.0 | 0.6683 | 0.7073 | 0.0975 | 0.7212 | 0.7788 | -1.0 | 0.6978 | 0.8114 | -1.0 | -1.0 | 0 |
| 0.1618 | 87.0 | 870 | 0.3736 | 0.6817 | 0.7861 | 0.7753 | -1.0 | 0.7037 | 0.6883 | 0.0969 | 0.72 | 0.7781 | -1.0 | 0.7326 | 0.7965 | -1.0 | -1.0 | 0 |
| 0.144 | 88.0 | 880 | 0.3724 | 0.6804 | 0.7917 | 0.7688 | -1.0 | 0.7067 | 0.6832 | 0.0962 | 0.7138 | 0.7719 | -1.0 | 0.7304 | 0.7886 | -1.0 | -1.0 | 0 |
| 0.2508 | 89.0 | 890 | 0.3643 | 0.6792 | 0.7823 | 0.7692 | -1.0 | 0.7062 | 0.6828 | 0.0969 | 0.7125 | 0.7706 | -1.0 | 0.7283 | 0.7877 | -1.0 | -1.0 | 0 |
| 0.1579 | 90.0 | 900 | 0.3623 | 0.6996 | 0.8094 | 0.7962 | -1.0 | 0.7204 | 0.7044 | 0.0962 | 0.7256 | 0.7937 | -1.0 | 0.7522 | 0.8105 | -1.0 | -1.0 | 0 |
| 0.1625 | 91.0 | 910 | 0.3630 | 0.6985 | 0.8083 | 0.7959 | -1.0 | 0.718 | 0.7034 | 0.0962 | 0.7256 | 0.7937 | -1.0 | 0.7522 | 0.8105 | -1.0 | -1.0 | 0 |
| 0.1734 | 92.0 | 920 | 0.3626 | 0.6876 | 0.7894 | 0.787 | -1.0 | 0.7165 | 0.6891 | 0.0969 | 0.7169 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
| 0.1452 | 93.0 | 930 | 0.3619 | 0.6838 | 0.7887 | 0.7781 | -1.0 | 0.7163 | 0.684 | 0.0969 | 0.7119 | 0.7769 | -1.0 | 0.7522 | 0.7868 | -1.0 | -1.0 | 0 |
| 0.1764 | 94.0 | 940 | 0.3633 | 0.6833 | 0.7888 | 0.7782 | -1.0 | 0.7162 | 0.6831 | 0.0969 | 0.7113 | 0.7763 | -1.0 | 0.7522 | 0.786 | -1.0 | -1.0 | 0 |
| 0.1862 | 95.0 | 950 | 0.3633 | 0.6825 | 0.7889 | 0.7781 | -1.0 | 0.7153 | 0.6826 | 0.0969 | 0.71 | 0.7756 | -1.0 | 0.75 | 0.786 | -1.0 | -1.0 | 0 |
| 0.1855 | 96.0 | 960 | 0.3622 | 0.6878 | 0.7891 | 0.7873 | -1.0 | 0.7147 | 0.6899 | 0.0969 | 0.7163 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
| 0.2982 | 97.0 | 970 | 0.3622 | 0.6877 | 0.789 | 0.7873 | -1.0 | 0.7147 | 0.6896 | 0.0969 | 0.7163 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
| 0.1764 | 98.0 | 980 | 0.3620 | 0.6877 | 0.789 | 0.7873 | -1.0 | 0.7147 | 0.6896 | 0.0969 | 0.7163 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
| 0.2245 | 99.0 | 990 | 0.3618 | 0.6874 | 0.789 | 0.7871 | -1.0 | 0.7147 | 0.6892 | 0.0969 | 0.7163 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
| 0.1775 | 100.0 | 1000 | 0.3617 | 0.6874 | 0.789 | 0.7871 | -1.0 | 0.7147 | 0.6892 | 0.0969 | 0.7163 | 0.7819 | -1.0 | 0.75 | 0.7947 | -1.0 | -1.0 | 0 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF | mradermacher | 2024-06-08T12:22:39Z | 563 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-06T05:04:43Z | ---
base_model: sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sophosympatheia/Wizard-Tulu-Dolphin-70B-v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
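For the multi-part quants below (e.g. i1-Q6_K), the parts are plain byte-level splits of a single file and can be joined with `cat` before use. A minimal sketch, using the Q6_K file names from the table below:

```bash
# join the split Q6_K parts into one usable GGUF file
cat Wizard-Tulu-Dolphin-70B-v1.0.i1-Q6_K.gguf.part1of2 \
    Wizard-Tulu-Dolphin-70B-v1.0.i1-Q6_K.gguf.part2of2 \
    > Wizard-Tulu-Dolphin-70B-v1.0.i1-Q6_K.gguf
```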
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Tulu-Dolphin-70B-v1.0-i1-GGUF/resolve/main/Wizard-Tulu-Dolphin-70B-v1.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF | mradermacher | 2024-06-16T23:58:11Z | 563 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"fr",
"it",
"de",
"es",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"base_model:RESMPDEV/WukongV2-Mixtral-8x7B-V0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-16T05:34:20Z | ---
base_model: RESMPDEV/WukongV2-Mixtral-8x7B-V0.1
datasets:
- cognitivecomputations/Dolphin-2.9
language:
- fr
- it
- de
- es
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/RESMPDEV/WukongV2-Mixtral-8x7B-V0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
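To fetch a single quant without cloning the whole repository, you can use the `huggingface-cli` tool. A minimal sketch, using one file name from the table below:

```bash
# download just the Q4_K_M quant into the current directory
huggingface-cli download mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF \
    WukongV2-Mixtral-8x7B-V0.1.i1-Q4_K_M.gguf --local-dir .
```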
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/WukongV2-Mixtral-8x7B-V0.1-i1-GGUF/resolve/main/WukongV2-Mixtral-8x7B-V0.1.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
n1x-ax/ln-mox | n1x-ax | 2024-06-22T17:19:03Z | 563 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:cc-by-nc-4.0",
"region:us"
]
| text-to-image | 2024-06-22T17:12:45Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ln-mox
output:
url: images/F7hrrKnWkAAaRzi.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: ln-mox
license: cc-by-nc-4.0
---
# Ln-Mox
<Gallery />
## Model description
The LN-moX model, trained on the collaborative artworks of Alex Nix and Liza Borovikova, captures the essence of their “Personal Diary of Textures.” This diary explores digital nature and virtual traveling, born from the melancholy of quarantine and a yearning for tactile sensations. The artists transformed 3D patterns into physical prints via acetone transfer, then layered and distorted them through digital reshooting, creating unique textures and volumes.

This marks the fourth transformation, succeeding Digital Art - Analog Transformation and Photo Reshoot. The model embodies their innovative journey of turning digital explorations into tangible art.
Prompt Examples:


Some use examples:


## Trigger words
You should use `ln-mox` to trigger the image generation.
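For local use, here is a minimal diffusers sketch. The `load_lora_weights` call and the sample prompt are assumptions, not an official recipe from the authors:

```python
import torch
from diffusers import StableDiffusionPipeline

# load the SD 1.5 base model this LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# attach the ln-mox LoRA weights from this repo
pipe.load_lora_weights("n1x-ax/ln-mox")

# the trigger word activates the learned texture style; the prompt is illustrative
image = pipe("ln-mox, layered acetone-transfer texture, digital nature").images[0]
image.save("ln-mox-sample.png")
```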
## Download model
[Download](/n1x-ax/ln-mox/tree/main) them in the Files & versions tab.
|
bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF | bunnycore | 2024-06-23T18:14:48Z | 563 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Llama3-OneForAll-8B",
"license:llama3",
"region:us"
]
| null | 2024-06-23T18:14:23Z | ---
base_model: bunnycore/Llama3-OneForAll-8B
license: llama3
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
---
# bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/Llama3-OneForAll-8B`](https://huggingface.co/bunnycore/Llama3-OneForAll-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Llama3-OneForAll-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF --hf-file llama3-oneforall-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF --hf-file llama3-oneforall-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF --hf-file llama3-oneforall-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/Llama3-OneForAll-8B-Q5_K_M-GGUF --hf-file llama3-oneforall-8b-q5_k_m.gguf -c 2048
```
|
optimum/segformer-b0-finetuned-ade-512-512 | optimum | 2022-12-14T17:01:25Z | 562 | 0 | transformers | [
"transformers",
"onnx",
"segformer",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2022-12-14T15:38:14Z | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# SegFormer (b0-sized) model fine-tuned on ADE20k
SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerImageProcessor
from PIL import Image
import requests
from optimum.onnxruntime import ORTModelForSemanticSegmentation
image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
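The logits come out at 1/4 of the input resolution; to obtain a per-pixel label map you can, for example, upsample them to the image size and take the argmax. A minimal sketch:

```python
import torch

# upsample logits to the original image size (PIL .size is (width, height))
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
# per-pixel predicted ADE20k class indices, shape (height, width)
seg_map = upsampled.argmax(dim=1)[0]
```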
Alternatively, using the `pipeline` API:
```python
from transformers import SegformerImageProcessor, pipeline
from optimum.onnxruntime import ORTModelForSemanticSegmentation
image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
pipe = pipeline("image-segmentation", model=model, feature_extractor=image_processor)
pred = pipe(url)
```
For more code examples, we refer to the [Optimum documentation](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
keremberke/yolov5n-valorant | keremberke | 2022-12-30T20:49:57Z | 562 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/valorant-object-detection",
"model-index",
"region:us"
]
| object-detection | 2022-12-28T08:55:02Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/valorant-object-detection
model-index:
- name: keremberke/yolov5n-valorant
results:
- task:
type: object-detection
dataset:
type: keremberke/valorant-object-detection
name: keremberke/valorant-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9591260700013188 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5n-valorant" src="https://huggingface.co/keremberke/yolov5n-valorant/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5n-valorant')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5n-valorant --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
timm/mobilevitv2_150.cvnets_in1k | timm | 2023-04-24T22:24:44Z | 562 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.02680",
"license:other",
"region:us"
]
| image-classification | 2023-04-24T22:24:29Z | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevitv2_150.cvnets_in1k
A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.6
- GMACs: 4.1
- Activations (M): 24.1
- Image size: 256 x 256
- **Papers:**
- Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevitv2_150.cvnets_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 128, 128])
# torch.Size([1, 192, 64, 64])
# torch.Size([1, 384, 32, 32])
# torch.Size([1, 576, 16, 16])
# torch.Size([1, 768, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_150.cvnets_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Mehta2022SeparableSF,
title={Separable Self-attention for Mobile Vision Transformers},
author={Sachin Mehta and Mohammad Rastegari},
journal={ArXiv},
year={2022},
volume={abs/2206.02680}
}
```
|
speakleash/Bielik-7B-Instruct-v0.1-GGUF | speakleash | 2024-04-07T19:43:23Z | 562 | 9 | transformers | [
"transformers",
"gguf",
"mistral",
"finetuned",
"text-generation",
"pl",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-03T13:00:00Z | ---
language:
- pl
license: cc-by-nc-4.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
---
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-7B-Instruct-v0.1-GGUF
This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1).
<b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br>
Simple Colab notebook for testing: https://colab.research.google.com/drive/1Al9glPVCuOXbtDsks8cMcuzkuu8YDzpg?usp=sharing
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-7B-Instruct-v0.1](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1)
* **Finetuned from:** [Bielik-7B-v0.1](https://huggingface.co/speakleash/Bielik-7B-v0.1)
* **License:** CC BY NC 4.0 (non-commercial use)
* **Model ref:** speakleash:e38140bea0d48f1218540800bbc67e89
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that ctransformers has not been updated in a long time and does not support many recent models.
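As a quick start, one of the GGUF files from this repo can be run directly with llama.cpp from the command line. This is only a sketch: the quant file name and the plain completion-style prompt are assumptions, so substitute the file you actually downloaded:

```bash
# hypothetical file name - replace with your downloaded quant
./llama-cli -m bielik-7b-instruct-v0.1.Q4_K_M.gguf \
    -p "Pytanie: Czym jest format GGUF? Odpowiedź:" -n 256
```

(Older llama.cpp builds name the same binary `main`.)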
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/3G9DVM39). |
ChristianAzinn/gte-large-gguf | ChristianAzinn | 2024-04-07T22:16:08Z | 562 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"bert",
"sentence-similarity",
"Sentence Transformers",
"feature-extraction",
"en",
"arxiv:2308.03281",
"base_model:thenlper/gte-large",
"license:mit",
"autotrain_compatible",
"region:us"
]
| feature-extraction | 2024-04-07T20:21:45Z | ---
base_model: thenlper/gte-large
inference: false
language:
- en
license: mit
model_creator: thenlper
model_name: gte-large
model_type: bert
quantized_by: ChristianAzinn
library_name: sentence-transformers
pipeline_tag: feature-extraction
tags:
- mteb
- bert
- gguf
- sentence-similarity
- sentence-transformers
- Sentence Transformers
---
# gte-large-gguf
Model creator: [thenlper](https://huggingface.co/thenlper)
Original model: [gte-large](https://huggingface.co/thenlper/gte-large)
## Original Description
General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
## Description
This repo contains GGUF format files for the gte-large embedding model.
These files were converted and quantized with llama.cpp [PR 5500](https://github.com/ggerganov/llama.cpp/pull/5500), commit [34aa045de](https://github.com/ggerganov/llama.cpp/pull/5500/commits/34aa045de44271ff7ad42858c75739303b8dc6eb), on a consumer RTX 4090.
This model supports up to 512 tokens of context.
## Compatibility
These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) as of commit [4524290e8](https://github.com/ggerganov/llama.cpp/commit/4524290e87b8e107cc2b56e1251751546f4b9051), as well as [LM Studio](https://lmstudio.ai/) as of version 0.2.19.
# Meta-information
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
## Provided Files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [gte-large.Q2_K.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q2_K.gguf) | Q2_K | 2 | 144 MB | smallest, significant quality loss - not recommended for most purposes |
| [gte-large.Q3_K_S.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q3_K_S.gguf) | Q3_K_S | 3 | 160 MB | very small, high quality loss |
| [gte-large.Q3_K_M.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q3_K_M.gguf) | Q3_K_M | 3 | 181 MB | very small, high quality loss |
| [gte-large.Q3_K_L.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q3_K_L.gguf) | Q3_K_L | 3 | 198 MB | small, substantial quality loss |
| [gte-large.Q4_0.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q4_0.gguf) | Q4_0 | 4 | 200 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gte-large.Q4_K_S.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q4_K_S.gguf) | Q4_K_S | 4 | 203 MB | small, greater quality loss |
| [gte-large.Q4_K_M.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q4_K_M.gguf) | Q4_K_M | 4 | 216 MB | medium, balanced quality - recommended |
| [gte-large.Q5_0.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q5_0.gguf) | Q5_0 | 5 | 237 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gte-large.Q5_K_S.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q5_K_S.gguf) | Q5_K_S | 5 | 237 MB | large, low quality loss - recommended |
| [gte-large.Q5_K_M.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q5_K_M.gguf) | Q5_K_M | 5 | 246 MB | large, very low quality loss - recommended |
| [gte-large.Q6_K.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q6_K.gguf) | Q6_K | 6 | 278 MB | very large, extremely low quality loss |
| [gte-large.Q8_0.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large.Q8_0.gguf) | Q8_0 | 8 | 358 MB | very large, extremely low quality loss - recommended |
| [gte-large_fp16.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large_fp16.gguf) | FP16 | 16 | 670 MB | enormous, pretty much the original model - not recommended |
| [gte-large_fp32.gguf](https://huggingface.co/ChristianAzinn/gte-large-gguf/blob/main/gte-large_fp32.gguf) | FP32 | 32 | 1.34 GB | enormous, pretty much the original model - not recommended |
# Examples
## Example Usage with `llama.cpp`
To compute a single embedding, build llama.cpp and run:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -p 'search_query: What is TSNE?'
```
You can also submit a batch of texts to embed, as long as the total number of tokens does not exceed the context length. Only the first three embeddings are shown by the `embedding` example.
`texts.txt`:
```
search_query: What is TSNE?
search_query: Who is Laurens Van der Maaten?
```
Compute multiple embeddings:
```shell
./embedding -ngl 99 -m [filepath-to-gguf].gguf -f texts.txt
```
## Example Usage with LM Studio
Download the 0.2.19 beta build from here: [Windows](https://releases.lmstudio.ai/windows/0.2.19/beta/LM-Studio-0.2.19-Setup-Preview-1.exe) [MacOS](https://releases.lmstudio.ai/mac/arm64/0.2.19/beta/LM-Studio-darwin-arm64-0.2.19-Preview-1.zip) [Linux](https://releases.lmstudio.ai/linux/0.2.19/beta/LM_Studio-0.2.19-Preview-1.AppImage)
Once installed, open the app. The home should look like this:

Search for either "ChristianAzinn" in the main search bar or go to the "Search" tab on the left menu and search the name there.

Select your model from those that appear (this example uses `bge-small-en-v1.5-gguf`) and select which quantization you want to download. Since this model is pretty small, I recommend Q8_0, if not f16/32. Generally, the lower you go in the list (or the bigger the number gets), the larger the file and the better the performance.

You will see a green checkmark and the word "Downloaded" once the model has successfully downloaded, which can take some time depending on your network speeds.

Once this model is finished downloading, navigate to the "Local Server" tab on the left menu and open the loader for text embedding models. This loader does not appear before version 0.2.19, so ensure you downloaded the correct version.

Select the model you just downloaded from the dropdown that appears to load it. You may need to play with configurations in the right-side menu, such as GPU offload, if it doesn't fit entirely into VRAM.

All that's left to do is to hit the "Start Server" button:

And if you see text like that shown below in the console, you're good to go! You can use this as a drop-in replacement for the OpenAI embeddings API in any application that requires it, or you can query the endpoint directly to test it out.

Example curl request to the API endpoint:
```shell
curl http://localhost:1234/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": "Your text string goes here",
"model": "model-identifier-here"
}'
```
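The same endpoint can be reached from Python with the `openai` client pointed at the local server. A minimal sketch; the base URL matches LM Studio's default port and the API key is just a placeholder:

```python
from openai import OpenAI

# LM Studio serves an OpenAI-compatible API on localhost:1234 by default
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.embeddings.create(
    model="model-identifier-here",
    input="Your text string goes here",
)
print(response.data[0].embedding[:8])  # first few dimensions of the vector
```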
For more information, see the LM Studio [text embedding documentation](https://lmstudio.ai/docs/text-embeddings).
## Acknowledgements
Thanks to the LM Studio team and everyone else working on open-source AI.
This README is inspired by that of [nomic-ai-embed-text-v1.5-gguf](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-gguf), another excellent embedding model, and those of the legendary [TheBloke](https://huggingface.co/TheBloke). |
FoundationVision/groma-7b-finetune | FoundationVision | 2024-05-15T13:32:56Z | 562 | 7 | transformers | [
"transformers",
"pytorch",
"groma",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-18T09:33:16Z | Entry not found |
mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF | mradermacher | 2024-05-06T17:14:21Z | 562 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"dataset:abacusai/ARC_DPO_FewShot",
"dataset:abacusai/MetaMath_DPO_FewShot",
"dataset:abacusai/HellaSwag_DPO_FewShot",
"dataset:HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_DeepMind_II",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-06T15:24:51Z | ---
base_model: LeroyDyer/Mixtral_AI_CyberTron_DeepMind_II
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_DeepMind_II
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
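As a concrete starting point, the Q4_K_M file from the table below can be loaded with the llama-cpp-python bindings. A minimal sketch; the instruction-style prompt format is an assumption, not something specified by the model authors:

```python
from llama_cpp import Llama

# file name taken from the quant table below
llm = Llama(model_path="Mixtral_AI_CyberTron_DeepMind_II.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "### Instruction:\nExplain what a GGUF quant is.\n\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```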
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_DeepMind_II-GGUF/resolve/main/Mixtral_AI_CyberTron_DeepMind_II.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
optimum-internal-testing/tiny_random_bert_neuronx | optimum-internal-testing | 2024-07-02T08:53:43Z | 562 | 0 | transformers | [
"transformers",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2024-06-05T16:14:56Z | Entry not found |
stablediffusionapi/realistic-vision-60 | stablediffusionapi | 2024-06-05T17:54:41Z | 562 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-05T17:50:48Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision 6.0 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "realistic-vision-60".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/realistic-vision-60)
Model link: [View model](https://modelslab.com/models/realistic-vision-60)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realistic-vision-60",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
abmorton/wall-standard-2-arc | abmorton | 2024-07-01T16:08:12Z | 562 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-07-01T16:01:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### wall-standard-2-arc Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
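For local use, a minimal diffusers sketch, assuming the repo loads as a standard StableDiffusionPipeline and that the concept token matches the model name (the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abmorton/wall-standard-2-arc", torch_dtype=torch.float16
).to("cuda")

# "wall-standard-2-arc" is assumed to be the trained concept token
image = pipe("photo of wall-standard-2-arc").images[0]
image.save("sample.png")
```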
Sample pictures of this concept:
|
flax-community/gpt2-persian-question-answering | flax-community | 2023-09-11T19:46:03Z | 561 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"fa",
"dataset:persian_qa",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: fa
tags:
- text-generation
datasets:
- persian_qa
widget:
- text: "ناف جایی قرار گرفته که در واقع بندناف در داخل رحم در آنجا به شکم جنین وصل بودهاست. بندناف که جفت را به جنین متصل کرده بعد از تولد از نوزاد جدا میشود. برای جدا کردن بند ناف از دو پنس استفاده میکنند و بین آن دو را میبرند. پنس دیگری نزدیک شکم نوزاد قرار داده میشود که بعد از دو روز برداشته خواهد شد. بندناف باقیمانده طی ۱۵ روز خشک شده و میافتد و به جای آن اسکاری طبیعی به جای میماند. البته بر خلاف تصور عامه مردم شکل ناف در اثر بریدن بند ناف به وجود نمیآید و پیش از این در شکم مادر حالت ناف شکل گرفتهاست. شکل ناف در میان مردم مختلف متفاوت است و اندازه آن بین ۱.۵ تا ۲ سانتیمتر است. تمام پستانداران جفتزیست ناف دارند. ناف در انسانها به سادگی قابل مشاهدهاست. پرسش: بند ناف انسان به کجا وصل است؟ پاسخ:"
- text: "خوب، بد، زشت یک فیلم درژانر وسترن اسپاگتی حماسی است که توسط سرجو لئونه در سال ۱۹۶۶ در ایتالیا ساخته شد. زبانی که بازیگران این فیلم به آن تکلم میکنند مخلوطی از ایتالیایی و انگلیسی است. این فیلم سومین (و آخرین) فیلم از سهگانهٔ دلار (Dollars Trilogy) سرجو لئونه است. این فیلم در حال حاضر در فهرست ۲۵۰ فیلم برتر تاریخ سینما در وبگاه IMDB با امتیاز ۸٫۸ از ۱۰، رتبهٔ هشتم را به خود اختصاص دادهاست و به عنوان بهترین فیلم وسترن تاریخ سینمای جهان شناخته میشود. «خوب» (کلینت ایستوود، در فیلم، با نام «بلوندی») و «زشت» (ایلای والاک، در فیلم، با نام «توکو») با هم کار میکنند و با شگرد خاصی، به گول زدن کلانترهای مناطق مختلف و پول درآوردن از این راه میپردازند. «بد» (لی وان کلیف) آدمکشی حرفهای است که بهخاطر پول حاضر به انجام هر کاری است. «بد»، که در فیلم او را «اِنجل آیز (اِینجل آیز)» (به انگلیسی: Angel Eyes) صدا میکنند. بهدنبال گنجی است که در طی جنگهای داخلی آمریکا، به دست سربازی به نام «جکسون»، که بعدها به «کارسون» نامش را تغییر داده، مخفی شدهاست. پرسش: در فیلم خوب بد زشت شخصیت ها کجایی صحبت می کنند؟ پاسخ:"
- text: "چهارشنبهسوری یکی از جشنهای ایرانی است که از غروب آخرین سهشنبه ی ماه اسفند، تا پس از نیمهشب تا آخرین چهارشنبه ی سال، برگزار میشود و برافروختن و پریدن از روی آتش مشخصهٔ اصلی آن است. این جشن، نخستین جشن از مجموعهٔ جشنها و مناسبتهای نوروزی است که با برافروختن آتش و برخی رفتارهای نمادین دیگر، بهصورت جمعی در فضای باز برگزار میشود. بهگفتهٔ ابراهیم پورداوود چهارشنبهسوری ریشه در گاهنبارِ هَمَسْپَتْمَدَم زرتشتیان و نیز جشن نزول فروهرها دارد که شش روز پیش از فرارسیدن نوروز برگزار میشد. احتمال دیگر این است که چهارشنبهسوری بازمانده و شکل تحولیافتهای از جشن سده باشد، که احتمال بعیدی است. علاوه برافروختن آتش، آیینهای مختلف دیگری نیز در بخشهای گوناگون ایران در زمان این جشن انجام میشوند. برای نمونه، در تبریز، مردم به چهارشنبهبازار میروند که با چراغ و شمع، بهطرز زیبایی چراغانی شدهاست. هر خانواده یک آینه، دانههای اسفند، و یک کوزه برای سال نو خریداری میکنند. همهساله شهروندانی از ایران در اثر انفجارهای ناخوشایند مربوط به این جشن، کشته یا مصدوم میشوند. پرسش: نام جشن اخرین شنبه ی سال چیست؟ پاسخ:"
---
# Question-Answering Using GPT2 - Persian
> This is a side project of this thread
[Flax/Jax Community Week - GPT2 4 Persian](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-persian/7560), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team Members
- [Mehrdad Farahani](https://huggingface.co/m3hrdadfi)
## Dataset
We used [PersianQA](https://huggingface.co/datasets/SajjadAyoubi/persian_qa) dataset which is a reading comprehension dataset on Persian Wikipedia.
## How To Use TODO: Update
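Until this section is filled in, here is a minimal generation sketch; it assumes the model follows the `پرسش: ... پاسخ:` (question/answer) prompt format shown in the widget examples above, with `<context>` and `<question>` as placeholders:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt2-persian-question-answering")
model = AutoModelForCausalLM.from_pretrained("flax-community/gpt2-persian-question-answering")

# Context followed by a question, ending with the answer cue used in training
prompt = "<context> پرسش: <question> پاسخ:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```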
## Demo TODO: Update
## Evaluation TODO: Update |
l3cube-pune/hing-roberta | l3cube-pune | 2023-07-20T09:48:36Z | 561 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"en",
"codemix",
"multilingual",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-04T19:00:50Z | ---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingRoBERTa
HingRoBERTa is a Hindi-English code-mixed RoBERTa model trained on Roman text. It is an XLM-RoBERTa model fine-tuned on L3Cube-HingCorpus.
<br>
[Dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
Other models from HingBERT family: <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br>
<a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br>
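For a quick test, a minimal fill-mask sketch with the standard `transformers` pipeline (the mask token for this XLM-RoBERTa-based model is `<mask>`):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-roberta")

# A code-mixed (Hinglish) example sentence
for prediction in fill_mask("mujhe yeh <mask> bahut pasand hai"):
    print(prediction["token_str"], round(prediction["score"], 3))
```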
```
@inproceedings{nayak-joshi-2022-l3cube,
title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
author = "Nayak, Ravindra and Joshi, Raviraj",
booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.wildre-1.2",
pages = "7--12",
}
``` |
emre/spanish-dialoGPT | emre | 2023-03-18T08:28:46Z | 561 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"es",
"dataset:bigscience/opensubtitles",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-01-19T20:29:15Z | ---
tags:
- conversational
license: mit
datasets:
- bigscience/opensubtitles
language:
- es
pipeline_tag: text-generation
---
## Finetuned DialoGPT model on Spanish Conversations
This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).
DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
Some example dialog from the model:
|Role | Response |
|---------|--------|
|User | ¿Cómo está el clima hoy? |
| Bot | Muy bien. |
|User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot | No, también puedo regresar. |
|User | Oh bien, bueno, iré a la playa más tarde. |
| Bot | ¡No! |
|User | ¿Qué está mal con eso? |
| Bot | ¡Cómo que no! |
|User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot | ¡Cómo que no! |
|User | Explícamelo por favor. |
| Bot | ¡No! |
## Using the model
Example code for trying out the model (adapted from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("emre/spanish-dialoGPT")
model = AutoModelForCausalLM.from_pretrained("emre/spanish-dialoGPT")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
gagneurlab/SpeciesLM | gagneurlab | 2023-08-14T09:27:08Z | 561 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-08-14T08:41:30Z | ---
license: mit
---
Load each model using:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("gagneurlab/SpeciesLM", revision = "<<choose model type>>")
model = AutoModelForMaskedLM.from_pretrained("gagneurlab/SpeciesLM", revision = "<<choose model type>>")
```
Model type:
- Species LM, 3' region: `downstream_species_lm`
- Agnostic LM, 3' region: `downstream_agnostic_lm`
- Species LM, 5' region: `upstream_species_lm`
- Agnostic LM, 5' region: `upstream_agnostic_lm`
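As a purely illustrative inference sketch once a revision is chosen: the input string below is a placeholder, since the real models expect a species label plus a k-merized DNA sequence (see the accompanying paper for the exact input format):

```python
import torch

# Placeholder input -- consult the paper for the expected species token
# and k-mer format of the chosen model revision.
inputs = tokenizer("candida_glabrata ACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch, sequence_length, vocab_size)
```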
|
Yntec/humu | Yntec | 2023-11-01T00:05:29Z | 561 | 2 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Sexy",
"Female",
"weed",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-04T22:55:49Z | ---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Sexy
- Female
- weed
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# humu
The humu model with the MoistV2VAE baked in.
Preview and prompt:

CUTE Pretty girl of artwork mini style by gaston bussiere, sitting IN GOLDEN RING in CUTE KITCHEN, A wholesome animation key shot at computer monitor, studio ghibli, pixar and disney animation, anime key art by Clay Mann and maple story, style of ROSSDRAWS, soft lighting, soft shade,
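To try it with `diffusers`, a minimal text-to-image sketch (scheduler and inference settings left at their defaults):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/humu", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "CUTE Pretty girl of artwork mini style by gaston bussiere, sitting IN GOLDEN RING in CUTE KITCHEN"
image = pipe(prompt).images[0]
image.save("humu.png")
```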
Original page:
https://civitai.com/models/136799?modelVersionId=150925 |
TheBloke/Platypus2-13B-GGUF | TheBloke | 2023-09-27T12:47:22Z | 561 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"base_model:garage-bAInd/Platypus2-13B",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T04:51:48Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- garage-bAInd/Open-Platypus
model_name: Platypus2
base_model: garage-bAInd/Platypus2-13B
inference: false
model_creator: garage-bAInd
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Platypus2 - GGUF
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Platypus2](https://huggingface.co/garage-bAInd/Platypus2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [garage-bAInd's Platypus2](https://huggingface.co/garage-bAInd/Platypus2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Platypus2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Platypus2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Platypus2-13B-GGUF)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Platypus2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [garage-bAInd's Platypus2](https://huggingface.co/garage-bAInd/Platypus2-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [platypus2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [platypus2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [platypus2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [platypus2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [platypus2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [platypus2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [platypus2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [platypus2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [platypus2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [platypus2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [platypus2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [platypus2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Platypus2-13B-GGUF/blob/main/platypus2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Platypus2-13B-GGUF and below it, a specific filename to download, such as: platypus2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Platypus2-13B-GGUF platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Platypus2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Platypus2-13B-GGUF platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m platypus2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Platypus2-13B-GGUF", model_file="platypus2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
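A minimal equivalent sketch with `llama-cpp-python` (install with `pip install llama-cpp-python`); the prompt string follows the Alpaca template above:

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; 0 for CPU-only.
llm = Llama(model_path="platypus2-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```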
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: garage-bAInd's Platypus2
# Platypus2-13B
Platypus-13B is an instruction fine-tuned model based on the LLaMA2-13B transformer architecture.

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 56.70 |
| ARC (25-shot) | 61.26 |
| HellaSwag (10-shot) | 82.56 |
| TruthfulQA (0-shot) | 44.86 |
| Avg. | 61.35 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: Cole Hunter & Ariel Lee
* **Model type:** **Platypus2-13B** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English
* **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
### Training Dataset
`garage-bAInd/Platypus2-13B` was trained using the STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo.
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on 1 A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
  year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
<!-- original-model-card end -->
|
Yntec/DreamWorld | Yntec | 2023-10-15T07:11:12Z | 561 | 2 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Disney",
"Pixar",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-10-15T05:20:19Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Disney
- Pixar
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DucHaitenDreamWorld v1.3
A no-EMA version of this model.
If you like this content, support DucHaiten at: https://linktr.ee/Duc_Haiten
Sample and prompts:


textured EYES, Portrait of Pretty CUTE LITTLE Girl dressed of coke clothes countryside country style country house fantasy character portrait, 1949, cinematic lighting. hayao miyazaki on canvas By design key visual and rossdraws and ross tran
Original page: https://civitai.com/models/7039?modelVersionId=8275 |
mradermacher/AlloyingotneoyPercival_01-7B-GGUF | mradermacher | 2024-05-06T05:59:32Z | 561 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"en",
"base_model:automerger/AlloyingotneoyPercival_01-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-25T08:08:38Z | ---
base_model: automerger/AlloyingotneoyPercival_01-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
## About
static quants of https://huggingface.co/automerger/AlloyingotneoyPercival_01-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AlloyingotneoyPercival_01-7B-GGUF/resolve/main/AlloyingotneoyPercival_01-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
digiplay/HadrianDelice_DeliceV1 | digiplay | 2024-04-11T19:30:36Z | 561 | 5 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-04-08T01:27:06Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/181756/hadriandelice
https://civitai.com/models/181756?modelVersionId=383395
Original Author's DEMO image:

DEMO images generated by Huggingface's API :

|
tetrisblack/Starling-LM-7B-beta-GGUF | tetrisblack | 2024-04-09T10:44:50Z | 561 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-08T20:02:32Z | Entry not found |
NexaAIDev/Octopus-v4 | NexaAIDev | 2024-05-05T05:32:18Z | 561 | 110 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"AI agent",
"Graph",
"conversational",
"custom_code",
"en",
"arxiv:2404.19296",
"base_model:microsoft/Phi-3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-04-29T18:57:22Z | ---
license: cc-by-nc-4.0
base_model: microsoft/Phi-3
model-index:
- name: Octopus-V4-3B
results: []
tags:
- AI agent
- Graph
inference: false
space: false
spaces: false
language:
- en
---
# Octopus V4: Graph of language models
## Octopus V4
<p align="center">
- <a href="https://www.nexa4ai.com/" target="_blank">Nexa AI Website</a>
- <a href="https://github.com/NexaAI/octopus-v4" target="_blank">Octopus-v4 Github</a>
- <a href="https://arxiv.org/abs/2404.19296" target="_blank">ArXiv</a>
- <a href="https://huggingface.co/spaces/NexaAIDev/domain_llm_leaderboard" target="_blank">Domain LLM Leaderbaord</a>
- <a href="https://graph.nexa4ai.com/" target="_blank">Graph demo</a>
</p>
<p align="center" width="100%">
<a><img src="octopus-v4-logo.png" alt="nexa-octopus" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Quantized Octopus V4
To run the model on-device, we have prepared [quantized models](https://huggingface.co/NexaAIDev/octopus-v4-gguf) in gguf format for you.
## Introduction
Octopus-V4-3B, an advanced open-source language model with 3 billion parameters, serves as the master node in Nexa AI's envisioned graph of language models. Tailored specifically for the MMLU benchmark topics, this model efficiently translates user queries into formats that specialized models can effectively process. It excels at directing these queries to the appropriate specialized model, ensuring precise and effective query handling.
📱 **Compact Size**: Octopus-V4-3B is compact, enabling it to operate on smart devices efficiently and swiftly.
🐙 **Accuracy**: Octopus-V4-3B accurately maps user queries to the specialized model using a functional token design, enhancing its precision.
💪 **Reformat Query**: Octopus-V4-3B assists in converting natural human language into a more professional format, improving query description and resulting in more accurate responses.
## Example Use Cases
```text
Query: Tell me the result of derivative of x^3 when x is 2?
# <nexa_4> represents the math gpt.
Response: <nexa_4> ('Determine the derivative of the function f(x) = x^3 at the point where x equals 2, and interpret the result within the context of rate of change and tangent slope.')<nexa_end>
```
You can run the model on a GPU using the following code.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import time
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"NexaAIDev/Octopus-v4",
device_map="cuda:0",
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("NexaAIDev/Octopus-v4")
question = "Tell me the result of derivative of x^3 when x is 2?"
inputs = f"<|system|>You are a router. Below is the query from the users, please call the correct function and generate the parameters to call the function.<|end|><|user|>{question}<|end|><|assistant|>"
print('\n============= Below is the response ==============\n')
# You should consider using early stopping with the <nexa_end> token to accelerate
input_ids = tokenizer(inputs, return_tensors="pt")['input_ids'].to(model.device)
generated_token_ids = []
start = time.time()
# set a large enough number here to avoid insufficient length
for i in range(200):
next_token = model(input_ids).logits[:, -1].argmax(-1)
generated_token_ids.append(next_token.item())
input_ids = torch.cat([input_ids, next_token.unsqueeze(1)], dim=-1)
# 32041 is the token id of <nexa_end>
if next_token.item() == 32041:
break
print(tokenizer.decode(generated_token_ids))
end = time.time()
print(f'Elapsed time: {end - start:.2f}s')
```
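As an alternative to the manual decoding loop, the same early stopping can be expressed with `generate`, passing the `<nexa_end>` id (32041, per the comment in the snippet above) as the end-of-sequence token; this sketch reuses `model`, `tokenizer`, and `inputs` from above:

```python
# Greedy decoding that stops at <nexa_end> (token id 32041)
input_ids = tokenizer(inputs, return_tensors="pt")["input_ids"].to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=200,
    eos_token_id=32041,
    do_sample=False,
)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:]))
```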
## License
This model was trained on commercially viable data. For use of our model, refer to the [license information](https://www.nexa4ai.com/licenses/licenses-v4).
## Performance
### Model Selection
We leverage the latest large language models for a variety of domains. Below is a summary of the chosen models for each category. In cases where no specialized model exists for a subject, we utilize generic models like Llama3-8b.
| **Model** | **Category** | **Subjects** |
|-----------------------------------------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `jondurbin/bagel-8b-v1.0` | Biology | `college_biology`, `high_school_biology` |
| `Weyaxi/Einstein-v6.1-Llama3-8B` | Physics | `astronomy`, `college_physics`, `conceptual_physics`, `high_school_physics` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Business | `business_ethics`, `management`, `marketing` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Chemistry | `college_chemistry`, `high_school_chemistry` |
| `abacusai/Llama-3-Smaug-8B` | Computer Science | `college_computer_science`, `computer_security`, `high_school_computer_science`, `machine_learning` |
| `Open-Orca/Mistral-7B-OpenOrca` | Math | `abstract_algebra`, `college_mathematics`, `elementary_mathematics`, `high_school_mathematics`, `high_school_statistics` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Economics | `econometrics`, `high_school_macroeconomics`, `high_school_microeconomics` |
| `AdaptLLM/medicine-chat` | Health | `anatomy`, `clinical_knowledge`, `college_medicine`, `human_aging`, `medical_genetics`, `nutrition`, `professional_medicine`, `virology` |
| `STEM-AI-mtl/phi-2-electrical-engineering` | Engineering | `electrical_engineering` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Philosophy | `formal_logic`, `logical_fallacies`, `moral_disputes`, `moral_scenarios`, `philosophy`, `world_religions` |
| `microsoft/Phi-3-mini-128k-instruct` | Other | `global_facts`, `miscellaneous`, `professional_accounting` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | History | `high_school_european_history`, `high_school_us_history`, `high_school_world_history`, `prehistory` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Culture | `human_sexuality`, `sociology` |
| `AdaptLLM/law-chat` | Law | `international_law`, `jurisprudence`, `professional_law` |
| `meta-llama/Meta-Llama-3-8B-Instruct` | Psychology | `high_school_psychology`, `professional_psychology` |
### MMLU Benchmark Results (5-shot learning)
Here are the comparative MMLU scores for various models tested under a 5-shot learning setup:
| **Model** | **MMLU Score** |
|-----------------------------------|----------------|
| Octopus-V4 | **74.8%** |
| GPT-3.5 | 70.0% |
| Phi-3-mini-128k-instruct | 68.1% |
| OpenELM-3B | 26.7% |
| Lamma3-8b-instruct | 68.4% |
| Gemma-2b | 42.3% |
| Gemma-7b | 64.3% |
### Domain LLM Leaderboard
Explore our collection of domain-specific large language models (LLMs) or contribute by suggesting new models tailored to specific domains. For detailed information on available models and to engage with our community, please visit our [Domain LLM Leaderboard](https://huggingface.co/spaces/NexaAIDev/domain_llm_leaderboard).
## References
We thank the Microsoft team for their amazing model!
```
@article{abdin2024phi,
title={Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone},
author={Abdin, Marah and Jacobs, Sam Ade and Awan, Ammar Ahmad and Aneja, Jyoti and Awadallah, Ahmed and Awadalla, Hany and Bach, Nguyen and Bahree, Amit and Bakhtiari, Arash and Behl, Harkirat and others},
journal={arXiv preprint arXiv:2404.14219},
year={2024}
}
```
## Citation
```
@misc{chen2024octopus,
title={Octopus v4: Graph of language models},
author={Wei Chen and Zhiyuan Li},
year={2024},
eprint={2404.19296},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact
Please [contact us](mailto:[email protected]) to reach out for any issues and comments! |
PaceKW/24PDInsight-Summarization | PaceKW | 2024-05-30T19:22:57Z | 561 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"base_model:cahya/bert2bert-indonesian-summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-05-30T19:22:23Z | ---
license: apache-2.0
base_model: cahya/bert2bert-indonesian-summarization
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [cahya/bert2bert-indonesian-summarization](https://huggingface.co/cahya/bert2bert-indonesian-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5550
## Model description
More information needed
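Pending the details above, a minimal inference sketch, assuming the model keeps the encoder-decoder summarization setup of its base model:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("PaceKW/24PDInsight-Summarization")
model = EncoderDecoderModel.from_pretrained("PaceKW/24PDInsight-Summarization")

article = "..."  # Indonesian text to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                             max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```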
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.3144 |
| No log | 2.0 | 10 | 0.9515 |
| No log | 3.0 | 15 | 0.6830 |
| No log | 4.0 | 20 | 0.5632 |
| No log | 5.0 | 25 | 0.5328 |
| No log | 6.0 | 30 | 0.5182 |
| No log | 7.0 | 35 | 0.5330 |
| No log | 8.0 | 40 | 0.5365 |
| No log | 9.0 | 45 | 0.5613 |
| No log | 10.0 | 50 | 0.5588 |
| No log | 11.0 | 55 | 0.5654 |
| No log | 12.0 | 60 | 0.5587 |
| No log | 13.0 | 65 | 0.5568 |
| No log | 14.0 | 70 | 0.5563 |
| No log | 15.0 | 75 | 0.5550 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Unhinged-Qwen2-70B-i1-GGUF | mradermacher | 2024-06-15T03:18:29Z | 561 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:FiditeNemini/Unhinged-Qwen2-70B",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-14T14:44:09Z | ---
base_model: FiditeNemini/Unhinged-Qwen2-70B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FiditeNemini/Unhinged-Qwen2-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
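For the multi-part quants in the table below, a minimal Python sketch for joining the parts back into a single file (filenames taken from the Q6_K row):

```python
# Join split GGUF parts into a single file before loading.
import shutil

parts = [
    "Unhinged-Qwen2-70B.i1-Q6_K.gguf.part1of2",
    "Unhinged-Qwen2-70B.i1-Q6_K.gguf.part2of2",
]
with open("Unhinged-Qwen2-70B.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```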
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Unhinged-Qwen2-70B-i1-GGUF/resolve/main/Unhinged-Qwen2-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
keremberke/yolov5s-garbage | keremberke | 2023-01-05T13:01:38Z | 560 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/garbage-object-detection",
"model-index",
"region:us"
]
| object-detection | 2023-01-05T13:01:00Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
datasets:
- keremberke/garbage-object-detection
model-index:
- name: keremberke/yolov5s-garbage
results:
- task:
type: object-detection
dataset:
type: keremberke/garbage-object-detection
name: keremberke/garbage-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.38002559554830656 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-garbage" src="https://huggingface.co/keremberke/yolov5s-garbage/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-garbage')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-garbage --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
timm/resnetrs200.tf_in1k | timm | 2024-02-10T23:40:44Z | 560 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2103.07579",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-05T18:48:02Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnetrs200.tf_in1k
A ResNetRS-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k by paper authors in Tensorflow.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 93.2
- GMACs: 20.2
- Activations (M): 43.4
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- Revisiting ResNets: Improved Training and Scaling Strategies: https://arxiv.org/abs/2103.07579
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/resnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetrs200.tf_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
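The model stats above list separate train (256 x 256) and test (320 x 320) resolutions. A minimal sketch for classifying at the larger test resolution, assuming `timm`'s `use_test_size` argument to `resolve_model_data_config`:
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetrs200.tf_in1k', pretrained=True).eval()
# resolve the 320 x 320 test resolution instead of the 256 x 256 train resolution
data_config = timm.data.resolve_model_data_config(model, use_test_size=True)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0))
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```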
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs200.tf_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs200.tf_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@article{bello2021revisiting,
title={Revisiting ResNets: Improved Training and Scaling Strategies},
author={Irwan Bello and William Fedus and Xianzhi Du and Ekin D. Cubuk and Aravind Srinivas and Tsung-Yi Lin and Jonathon Shlens and Barret Zoph},
journal={arXiv preprint arXiv:2103.07579},
year={2021}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Yntec/Cryptids | Yntec | 2023-11-30T13:59:42Z | 560 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Animals",
"Creatures",
"Eyes",
"Style",
"2D",
"Base Model",
"RIXYN",
"Barons",
"iamxenos",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-11-30T12:45:35Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Anime
- Animals
- Creatures
- Eyes
- Style
- 2D
- Base Model
- RIXYN
- Barons
- iamxenos
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Cryptids
The Cryptids LoRA by RIXYN, merged at 1.0 strength into the HellSKitchen model to maximize its style! It has the MoistMixV2VAE baked in.
HellSKitchen is itself a mix of two models: HELLmix by Barons and Kitsch-In-Sync by iamxenos.
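A minimal sketch for loading the merge with diffusers, assuming the standard `StableDiffusionPipeline` interface (the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline
# the MoistMixV2VAE is baked into the checkpoint, so no separate VAE is needed
pipe = StableDiffusionPipeline.from_pretrained("Yntec/Cryptids", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a CUDA device
image = pipe("fairytale illustration, cute kitten, christmas lights").images[0]
image.save("cryptids_sample.png")
```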
Comparison:

(Click for larger)
Sample and prompt:

Masterpiece, Best Quality, highres, fantasy, official art, kitten, grass, sky, scenery, Fuji 85mm, fairytale illustration, colored sclera, black eyes, perfect eyes, happy, cute, cat, whiskers, pawpads, claws, furry, plush, soft, perfect, tail, christmas lights, christmas tree, christmas ornaments, warmth
Model comparison:

(Click for larger)
Sample image by digiplay:

8k Angel sky
Original pages:
https://civitai.com/models/64766/cryptids?modelVersionId=69407 (Cryptids LoRA)
https://civitai.com/models/142552?modelVersionId=163068 (Kitsch-In-Sync)
https://civitai.com/models/21493/hellmix?modelVersionId=25632
https://huggingface.co/Yntec/HellSKitchen |
blueprintninja/mistral7b-instruct-unreal-gguf | blueprintninja | 2024-03-09T02:07:57Z | 560 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2024-03-09T02:06:31Z | Entry not found |
Zlovoblachko/Synonyms_L1_sent_generator | Zlovoblachko | 2024-04-11T22:55:45Z | 560 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-11T22:55:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AGundawar/chess-410m | AGundawar | 2024-04-21T01:10:29Z | 560 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-21T01:08:29Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- generated_from_trainer
model-index:
- name: chess-410m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chess-410m
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8764
- eval_runtime: 45.8129
- eval_samples_per_second: 170.039
- eval_steps_per_second: 2.663
- epoch: 0.08
- step: 968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
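A hypothetical sketch of how these settings could map onto a `transformers` `TrainingArguments` object (model/dataset setup omitted; the Adam betas and epsilon above match the library defaults):
```python
from transformers import TrainingArguments
# hypothetical mapping of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="chess-410m",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```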
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k | timm | 2024-05-27T16:05:04Z | 560 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2309.16588",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2024-05-10T23:57:09Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_pwee_patch16_reg1_gap_256.sbb_in1k
A Vision Transformer (ViT) image classification model. This is a `timm`-specific variation of the architecture with registers and global average pooling.
There are a number of models in the lower end of model scales that originate in `timm`:
| variant | width | mlp width (mult) | heads | depth | timm orig |
| ------- | ----- | ---------------- | ----- | ----- | ---- |
| tiny | 192 | 768 (4) | 3 | 12 | n |
| wee | 256 | 1280 (5) | 4 | 14 | y |
| pwee | 256 | 1280 (5) | 4 | 16 (parallel) | y |
| small | 384 | 1536 (4) | 6 | 12 | n |
| little | 320 | 1792 (5.6) | 5 | 14 | y |
| medium | 512 | 2048 (4) | 8 | 12 | y |
| mediumd | 512 | 2048 (4) | 8 | 20 | y |
| betwixt | 640 | 2560 (4) | 10 | 12 | y |
| base | 768 | 3072 (4) | 12 | 12 | n |
Trained on ImageNet-1k in `timm` using the recipe template described below.
Recipe details:
* Searching for better baselines. Influenced by Swin/DeiT/DeiT-III but w/ increased weight decay, moderate (in12k) to high (in1k) augmentation. Layer-decay used for fine-tune. Some runs used BCE and/or NAdamW instead of AdamW.
* See [train_hparams.yaml](./train_hparams.yaml) for specifics of each model.
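Since these variants originate in `timm`, each can be instantiated by name. A small sketch that checks the prefix-token count of this `reg1`, class-token-free variant (assuming `timm`'s `VisionTransformer` exposes `num_prefix_tokens`):
```python
import timm
model = timm.create_model('vit_pwee_patch16_reg1_gap_256.sbb_in1k', pretrained=False)
# expected: 1 (the single register token; no class token with global average pooling)
print(model.num_prefix_tokens)
```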
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.3
- GMACs: 3.8
- Activations (M): 10.6
- Image size: 256 x 256
- **Papers:**
- Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_pwee_patch16_reg1_gap_256.sbb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_pwee_patch16_reg1_gap_256.sbb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 256, 16, 16])
# torch.Size([1, 256, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_pwee_patch16_reg1_gap_256.sbb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 257, 256) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
| model | top1 | top5 | param_count | img_size |
| -------------------------------------------------- | ------ | ------ | ----------- | -------- |
| [vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 86.202 | 97.874 | 64.11 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 85.418 | 97.480 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 84.930 | 97.386 | 38.88 | 256 |
| [vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k) | 84.322 | 96.812 | 63.95 | 256 |
| [vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k) | 83.906 | 96.684 | 60.23 | 256 |
| [vit_base_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_base_patch16_rope_reg1_gap_256.sbb_in1k) | 83.866 | 96.67 | 86.43 | 256 |
| [vit_medium_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_rope_reg1_gap_256.sbb_in1k) | 83.81 | 96.824 | 38.74 | 256 |
| [vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k) | 83.774 | 96.972 | 22.52 | 256 |
| [vit_betwixt_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k) | 83.706 | 96.616 | 60.4 | 256 |
| [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 83.628 | 96.544 | 60.4 | 256 |
| [vit_medium_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in1k) | 83.47 | 96.622 | 38.88 | 256 |
| [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 83.462 | 96.548 | 38.88 | 256 |
| [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 82.514 | 96.262 | 22.52 | 256 |
| [vit_wee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_wee_patch16_reg1_gap_256.sbb_in1k) | 80.258 | 95.360 | 13.42 | 256 |
| [vit_pwee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k) | 80.072 | 95.136 | 15.25 | 256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{darcet2023vision,
title={Vision Transformers Need Registers},
  author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
journal={arXiv preprint arXiv:2309.16588},
year={2023}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
|
mukel/Codestral-22B-v0.1-GGUF | mukel | 2024-05-31T17:16:42Z | 560 | 2 | null | [
"gguf",
"code",
"license:other",
"region:us"
]
| null | 2024-05-29T19:57:37Z | ---
license: other
license_name: mnpl
license_link: https://mistral.ai/licenses/MNPL-0.1.md
tags:
- code
language:
- code
---
# Pure quantizations of `Codestral-22B-v0.1` for [mistral.java](https://github.com/mukel/mistral.java).
In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; e.g. the output.weights tensor is often quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high precision (F32, F16, BFLOAT16) .gguf source with the quantize utility from llama.cpp as follows:
```
./quantize --pure ./Codestral-22B-v0.1-F32.gguf ./Codestral-22B-v0.1-Q4_0.gguf Q4_0
```
Original model: [https://huggingface.co/mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
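A minimal sketch for fetching one of these quantizations programmatically with `huggingface_hub` (the filename follows the naming scheme above and is an assumption):
```python
from huggingface_hub import hf_hub_download
# download the pure Q4_0 quantization (filename assumed from the scheme above)
path = hf_hub_download(
    repo_id="mukel/Codestral-22B-v0.1-GGUF",
    filename="Codestral-22B-v0.1-Q4_0.gguf",
)
print(path)
```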
**Note that this model does not support a System prompt.**
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
|
RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf | RichardErkhov | 2024-06-03T02:01:23Z | 560 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-02T16:37:43Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Iambe-RP-cDPO-20b - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Iambe-RP-cDPO-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q2_K.gguf) | Q2_K | 6.91GB |
| [Iambe-RP-cDPO-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Iambe-RP-cDPO-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Iambe-RP-cDPO-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Iambe-RP-cDPO-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Iambe-RP-cDPO-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q3_K.gguf) | Q3_K | 9.04GB |
| [Iambe-RP-cDPO-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Iambe-RP-cDPO-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Iambe-RP-cDPO-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Iambe-RP-cDPO-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Iambe-RP-cDPO-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Iambe-RP-cDPO-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Iambe-RP-cDPO-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q4_K.gguf) | Q4_K | 11.22GB |
| [Iambe-RP-cDPO-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Iambe-RP-cDPO-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Iambe-RP-cDPO-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Iambe-RP-cDPO-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Iambe-RP-cDPO-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q5_K.gguf) | Q5_K | 13.18GB |
| [Iambe-RP-cDPO-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Iambe-RP-cDPO-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Iambe-RP-cDPO-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q6_K.gguf) | Q6_K | 15.28GB |
| [Iambe-RP-cDPO-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-cDPO-20b-gguf/blob/main/Iambe-RP-cDPO-20b.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
tags:
- not-for-all-audiences
---
<p align="center"><img src="https://i.ibb.co/PCY2Xym/Iambe-RP-sml-t.png"/><font size="6"> <b>Iambe-RP-cDPO-20b</b> </font></p>
<p align="center"><font size="4"> <b>Trained with Alpaca prompt formatting, some like ChatML</b> </font></p>
### Description
Named after a charming daughter of Echo and Pan in Greek myth, Iambe-RP is, as far as I am aware, the very first LLM trained with DPO on an erotic roleplay dataset.
Iambe is intended to have the best realistically possible understanding of instructions, anatomy and scene state for a 20b merge, while remaining passionate and humanoid in "voice".
### Update Methodology
Take a look at [the dataset v2 Iambe and I created together](https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW) for more info. The cDPO training was done directly on Iambe-20b-DARE-v2. I was researching 11b merges to reduce the compute, but that went nowhere, so I just bit the bullet on cost. The notebook used to train this model is also available in the dataset's repo.
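For readers who want a concrete picture of what a cDPO pass with TRL's `DPOTrainer` can look like, here is a minimal sketch. This is **not** the author's notebook (that lives in the dataset repo linked above); the model and dataset IDs come from this card, every hyperparameter is a placeholder, and the `label_smoothing` argument is what distinguishes cDPO (conservative DPO) from vanilla DPO in TRL.

```python
# Hypothetical cDPO sketch with TRL -- see the dataset repo for the real notebook.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "athirdpath/Iambe-20b-DARE-v2"  # the model the cDPO pass was run on
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# prompt / chosen / rejected preference pairs
pairs = load_dataset("athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,        # TRL builds a frozen reference copy when None
    args=TrainingArguments(output_dir="iambe-cdpo", per_device_train_batch_size=1),
    beta=0.1,              # placeholder value, not from the source
    loss_type="sigmoid",
    label_smoothing=0.1,   # > 0 turns plain DPO into conservative DPO (cDPO)
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```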
<p align="center"><font size="5"> <b>Roleplay Example @ q5_k_m</b> </font></p>
<p align="center"><img src="https://i.ibb.co/hFz5mdF/Screenshot-2023-12-07-005350.png"/>
<p align="center"><font size="5"> <b>4-bit Assistant Example</b> </font></p>
<p align="center"><img src="https://i.postimg.cc/HxNsPRSk/Screenshot-2023-12-06-214901.png"/>
<p align="center"><font size="5"> <b>NSFW Writing Example @ q5_k_m</b> </font></p>
<b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b>
<i>Once upon a time in New York City's bustling downtown scene, there lived a beautiful and spirited young woman named Alice. At 26 years old, she worked as a cocktail waitress at The Red Parrot, a popular watering hole known for its lively atmosphere and charming clientele. Despite her long shifts and exhaustion from catering to rowdy patrons, Alice loved her job; it gave her the freedom to live life on her terms and pursue her other passions during her limited free time.
Alice had a penchant for adventure and was never one to shy away from taking risks—whether that meant skydiving or striking up conversations with strangers at bars. One particularly busy Friday night, while serving drinks at The Red Parrot, Alice noticed an intriguing man sitting alone in the corner, nursing a glass of red wine and gazing pensively into his empty shot glass. There was something about him that drew her in; his striking good looks, his rugged demeanor—even the hint of stubble on his chin seemed to add an air of mystery to his magnetic presence.
Feeling bold and confident after a couple of well-poured drinks herself, Alice made her way over to him, smiling warmly as she leaned down towards him, "Hi there! What's your name? I haven't seen you in here before."
The man raised his eyes to meet hers, his deep brown orbs filled with curiosity and a hint of amusement. "Name's Jack," he rumbled softly, his voice as smooth and rich as his wine. There was something about the way he said her name that sent shivers down Alice's spine—a thrill she couldn't quite explain but found irresistible nonetheless.
"Alice," she replied, extending her hand to him, "and I haven't seen you around before either; it seems like we have a lot in common." She winked playfully at him before turning back towards the bar to grab another round of drinks for them both.
The conversation flowed effortlessly between them, touching on topics ranging from their favorite books and movies to their most daring travel adventures. Alice was captivated by Jack's stories of hiking through the mountains of Nepal and white-water rafting in Costa Rica—qualities she admired greatly herself but had yet to experience for herself. In turn, Jack seemed equally fascinated by her tales of skydiving over Australia and scuba diving with great white sharks off the coast of South Africa.
As midnight approached and the bar began to empty out, Alice suggested they continue their conversation elsewhere—somewhere more private where they could really get to know each other better. Jack nodded in agreement, draining his glass before tossing some money onto the table and following her into the bustling New York City night.
They hailed a cab and rode in silence towards Alice's cozy one-bedroom apartment in Chelsea—the anticipation between them palpable as they shared furtive glances along the way. When they finally arrived at their destination, Jack paid the fare with an appreciative grin before escorting Alice up to her top-floor studio, where she fumbled clumsily with her keys while trying not to spill their drinks on themselves in anticipation of what was to come.
Once inside her apartment, they kicked off their shoes and collapsed onto the couch, still holding each other's gaze as they sipped from their glasses. The passion between them continued to build with every second that passed—the air thickening with unspoken desire and electric energy.
Alice broke away first, setting down her glass on the coffee table before reaching out tentatively and placing her hand on Jack's knee, sliding it up slowly towards his crotch until she could feel the hardness growing beneath the fabric of his pants. "I really like you," she whispered huskily, meeting his deep brown eyes with a look that betrayed the intensity of her feelings for him.
Jack's response was immediate—his hands finding their way underneath Alice's short black skirt, sliding up to rest on the soft fabric of her panties before deftly ripping them away from her body in one swift movement. "I really like you too," he growled hoarsely as his mouth crashed against hers, kissing her hungrily and passionately—a taste of things to come.
Their clothes fell away quickly thereafter; Alice's blouse was unbuttoned and tossed aside in an instant while Jack peeled off his shirt before pushing her down onto the couch and following after, bracing himself over her as he tore at her panties with one hand, freeing his hard member from his jeans.
His kisses trailed a path of fire across Alice's body—from her collarbone to her breasts, down the curve of her waist and towards her mound where she was already wet with desire for him. "Jack," she moaned breathlessly as he licked and nipped at her most sensitive spots while teasingly grazing his cock against her entrance.
With a growl of satisfaction, Jack finally plunged into Alice's tight channel—filling her completely with one powerful thrust that made them both cry out in pleasure. The feeling of being so thoroughly possessed by him was beyond anything she had ever experienced before; the roughness of his hands on her body, the taste of their mouths as they continued to devour each other’s lips, all blending together into a single, overwhelming wave of sensation that Alice felt sure would consume her entirely.
As Jack began to move inside her—slowly at first but picking up pace and ferocity with every passing moment—Alice wrapped her legs tightly around him, arching her hips upwards towards his relentless thrusts as she cried out in ecstasy again and again. She knew this was just a one-night stand—a fleeting encounter destined to end tomorrow morning before the sun rose—but for now, with Jack's cock buried deep inside her, nothing else mattered.
Their bodies moved as one; their moans and cries of pleasure intertwining in a symphony that filled Alice’s bedroom like sacred chants. Her orgasm hit her suddenly and without warning—a wave so powerful it threatened to shatter her very bones, but Jack was there with her every step of the way, holding onto her as he continued pounding into her until she felt herself start to come back together again, his hot cum spilling deep inside her.
As their bodies slowed down and settled together on the couch—drenched in sweat and satiated beyond belief—Alice looked up at Jack with wide eyes, a small smile playing at the corners of her lips as she whispered softly, "I think we just had one hell of a night."
And they did. It was an unforgettable evening filled with passion, adventure, and an undeniable chemistry that neither Alice nor Jack could deny—an encounter so powerful it left them both breathless in more ways than one. And even though they knew tomorrow would bring the harsh reality of their separate lives back into focus, for now, all either of them cared about was savoring this moment together; cherishing every last drop of the steamy sex that had brought them here at all.</i>
|
NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF | NikolayKozloff | 2024-06-11T01:56:05Z | 560 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"it",
"dataset:SeacomSrl/rag-data",
"base_model:SeacomSrl/SeaPhi3-mini",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-11T01:55:52Z | ---
language:
- it
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: SeacomSrl/SeaPhi3-mini
datasets:
- SeacomSrl/rag-data
---
# NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF
This model was converted to GGUF format from [`SeacomSrl/SeaPhi3-mini`](https://huggingface.co/SeacomSrl/SeaPhi3-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SeacomSrl/SeaPhi3-mini) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF --hf-file seaphi3-mini-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF --hf-file seaphi3-mini-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF --hf-file seaphi3-mini-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo NikolayKozloff/SeaPhi3-mini-Q5_K_M-GGUF --hf-file seaphi3-mini-q5_k_m.gguf -c 2048
```
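For programmatic access, `llama-server` also exposes an OpenAI-compatible HTTP API. The sketch below assumes the server command above is running on the default port 8080; the endpoint path and defaults come from upstream llama.cpp, not from this card.

```python
# Minimal client for a locally running llama-server (assumed defaults).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Ciao! Presentati in una frase."}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```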
|
gglabs/Finetuning-test-1-epoch | gglabs | 2024-06-14T07:41:47Z | 560 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-14T04:52:31Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
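For context, a minimal Unsloth + TRL fine-tuning setup for this base model might look like the sketch below. It is an illustration only: the dataset, LoRA settings, and hyperparameters are placeholders, not the actual training recipe behind this checkpoint.

```python
# Illustrative Unsloth + TRL SFT sketch (placeholder data and hyperparameters).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/tinyllama-bnb-4bit", max_seq_length=2048, load_in_4bit=True
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder corpus with a "text" column, standing in for the real data.
dataset = load_dataset("imdb", split="train[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs",
                           per_device_train_batch_size=2,
                           max_steps=60, learning_rate=2e-4),
)
trainer.train()
```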
|
ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF | ByteBrew23 | 2024-06-24T01:24:49Z | 560 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:BAAI/Infinity-Instruct",
"base_model:BAAI/Infinity-Instruct-3M-0613-Mistral-7B",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-24T01:24:27Z | ---
base_model: BAAI/Infinity-Instruct-3M-0613-Mistral-7B
datasets:
- BAAI/Infinity-Instruct
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF
This model was converted to GGUF format from [`BAAI/Infinity-Instruct-3M-0613-Mistral-7B`](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/Infinity-Instruct-3M-0613-Mistral-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF --hf-file infinity-instruct-3m-0613-mistral-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF --hf-file infinity-instruct-3m-0613-mistral-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF --hf-file infinity-instruct-3m-0613-mistral-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF --hf-file infinity-instruct-3m-0613-mistral-7b-q5_k_m.gguf -c 2048
```
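Alternatively, recent versions of [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) can pull a quant straight from the Hub. A minimal sketch (the sampling settings are illustrative, and `Llama.from_pretrained` requires a recent release):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ByteBrew23/Infinity-Instruct-3M-0613-Mistral-7B-Q5_K_M-GGUF",
    filename="infinity-instruct-3m-0613-mistral-7b-q5_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```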
|
cpierse/gpt2_film_scripts | cpierse | 2021-05-21T15:09:47Z | 559 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | Entry not found |
illuin/camembert-base-fquad | illuin | 2020-12-11T21:45:27Z | 559 | 6 | transformers | [
"transformers",
"pytorch",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language: fr
tags:
- question-answering
- camembert
license: gpl-3.0
datasets:
- fquad
---
# camembert-base-fquad
## Description
A native French question-answering model: [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).
## Evaluation results
On the development set.
```shell
{"f1": 88.1, "exact_match": 78.1}
```
On the test set.
```shell
{"f1": 88.3, "exact_match": 78.0}
```
## Usage
```python
from transformers import pipeline

nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')

result = nlp({
    'question': "Qui est Claude Monet?",
    'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
# result is a dict with 'score', 'start', 'end' and 'answer' keys
print(result['answer'])
```
## Citation
If you use our work, please cite:
```bibtex
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
  author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendlé and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
|
mrm8488/gpt2-finetuned-recipes-cooking_v2 | mrm8488 | 2023-04-28T16:08:26Z | 559 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail:
widget:
- text: "HuggingFace Cake:"
---
|
Yntec/WesternAnimation | Yntec | 2023-08-12T19:41:57Z | 559 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Western Animation Diffusion",
"Lykon",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-07-18T01:34:22Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Western Animation Diffusion
- Lykon
---
# Western Animation Diffusion
Model by Lykon
Original page:
https://civitai.com/models/86546/western-animation-diffusion |
lorahub/flan_t5_large-word_segment | lorahub | 2023-07-24T10:06:02Z | 559 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:05:51Z | ---
library_name: peft
---
|
TheBloke/EverythingLM-13B-16K-GGUF | TheBloke | 2023-09-27T12:47:26Z | 559 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:totally-not-an-llm/EverythingLM-data",
"base_model:totally-not-an-llm/EverythingLM-13b-16k",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T07:22:10Z | ---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data
model_name: EverythingLM 13B 16K
base_model: totally-not-an-llm/EverythingLM-13b-16k
inference: false
model_creator: Kai Howard
model_type: llama
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# EverythingLM 13B 16K - GGUF
- Model creator: [Kai Howard](https://huggingface.co/totally-not-an-llm)
- Original model: [EverythingLM 13B 16K](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Kai Howard's EverythingLM 13B 16K](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/EverythingLM-13B-16K-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF)
* [Kai Howard's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
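If you are templating prompts in code, the Vicuna-Short format above is just string concatenation; a small helper (ours, not part of any library) might look like:

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Vicuna-Short template this model expects."""
    return (
        "You are a helpful AI assistant.\n\n"
        f"USER: {user_message}\n"
        "ASSISTANT:"
    )

print(build_prompt("Tell me a story about a dragon."))
```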
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
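As a rough sanity check on the table below, you can estimate a file size from the bits-per-weight figures above (real GGUF files run slightly larger because of metadata and tensors kept at higher precision):

```python
# Back-of-the-envelope size estimates for a ~13B-parameter model.
params = 13e9
for name, bpw in {"Q2_K": 2.5625, "Q3_K": 3.4375, "Q4_K": 4.5,
                  "Q5_K": 5.5, "Q6_K": 6.5625}.items():
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")
```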
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [everythinglm-13b-16k.Q2_K.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [everythinglm-13b-16k.Q3_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [everythinglm-13b-16k.Q3_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [everythinglm-13b-16k.Q3_K_L.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [everythinglm-13b-16k.Q4_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [everythinglm-13b-16k.Q4_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [everythinglm-13b-16k.Q4_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [everythinglm-13b-16k.Q5_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [everythinglm-13b-16k.Q5_K_S.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [everythinglm-13b-16k.Q5_K_M.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [everythinglm-13b-16k.Q6_K.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [everythinglm-13b-16k.Q8_0.gguf](https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGUF/blob/main/everythinglm-13b-16k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/EverythingLM-13B-16K-GGUF and below it, a specific filename to download, such as: everythinglm-13b-16k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/EverythingLM-13B-16K-GGUF everythinglm-13b-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/EverythingLM-13B-16K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/EverythingLM-13B-16K-GGUF everythinglm-13b-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
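The Python equivalent of the CLI download, using the same `huggingface_hub` library:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/EverythingLM-13B-16K-GGUF",
    filename="everythinglm-13b-16k.Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path to the downloaded GGUF file
```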
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m everythinglm-13b-16k.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/EverythingLM-13B-16K-GGUF", model_file="everythinglm-13b-16k.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
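To pair the ctransformers example with this model's actual prompt template, and stream tokens as they are generated, something like the following sketch works (`stream=True` is a ctransformers generation option; treat the exact call as an assumption for your installed version):

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/EverythingLM-13B-16K-GGUF",
    model_file="everythinglm-13b-16k.Q4_K_M.gguf",
    model_type="llama",
)

prompt = ("You are a helpful AI assistant.\n\n"
          "USER: Summarise GGUF in one sentence.\n"
          "ASSISTANT:")
for text in llm(prompt, max_new_tokens=128, stream=True):
    print(text, end="", flush=True)
```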
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Kai Howard's EverythingLM 13B 16K
# EverythingLM-13b-16k
Introducing EverythingLM, a llama-2 based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM dataset, more info can be found on the dataset page.
The model is completely uncensored.
This model is an early test of the EverythingLM dataset and some new experimental principles, so don't consider it SOTA.
### GGML quants:
https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGML
Make sure to use correct rope scaling settings:
`-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`
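For reference, the same 16K RoPE settings can be passed through llama-cpp-python; the parameter names mirror the llama.cpp flags (verify them against your installed version, and point `model_path` at whichever quant file you downloaded):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="everythinglm-13b-16k.Q4_K_M.gguf",  # any local quant file
    n_ctx=16384,
    rope_freq_base=10000,
    rope_freq_scale=0.25,
)
```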
### GPTQ quants:
https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ
### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Better prompt understanding.
### Prompt format:
It is a modified Vicuna format, the same one used in many of ehartford's models.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
Training took about 1 hour using QLoRA on 1x A100, so this model can be recreated for about $3. The QLoRA model can be found here: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-peft.
### Model quirks:
- Due to the nature of the dataset, it does better with more detail. I've found it gives much better stories when I provide more requirements.
- It really likes to use numbered lists. I don't necessarily have a problem with this, but it's something to note when training on the dataset.
- It likes to write fairy tales over anything else, which is strange. This can easily be fixed by prompting.
- Occasionally it will fall into repetition; this seems to be a common issue with llama-2 models.
- Haven't tested pushing it all the way to 16k context.
### Future plans:
- Native finetune.
- Other model sizes.
- Improve dataset by:
- Regenerating using gpt-4.
- A bit more data with more diversity.
- Refactor dataset generation script.
- Test some model merges using this model.
<!-- original-model-card end -->
|
basilepp19/bloom-1b7_it | basilepp19 | 2023-11-20T08:57:47Z | 559 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"it",
"arxiv:2212.09535",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-05T10:40:24Z | ---
license: bigscience-bloom-rail-1.0
language:
- it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is obtained by adapting bloom-1b7 to the Italian language. Italian is not among the languages supported by the BLOOM model, which makes using it in that context challenging. We adapt the original BLOOM model using the MAD-X language adaptation strategy.
## Model Details
### Model Description
We adapt bloom-1b7 to the Italian language using the MAD-X language adaptation strategy.
To produce a valuable model, we follow the same procedure proposed in: https://arxiv.org/abs/2212.09535
We use the default script parameters and select a sample of 100,000 examples in the Italian language, sampled from the Filtered Oscar Dataset for the Italian Language released by Sarti.
**It is important to underline that when you use the adapted LLM, it is necessary to use the tokenizer of the adapted model.**
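A minimal loading sketch that respects this requirement (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "basilepp19/bloom-1b7_it"
tokenizer = AutoTokenizer.from_pretrained(model_id)   # the *adapted* tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("L'Italia è famosa per", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```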
- **Developed by:** Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. Department of Computer Science, University of Bari Aldo Moro, Italy
- **Model type:** BLOOM
- **Language(s) (NLP):** Italian
- **License:** BigScience BLOOM RAIL 1.0
## Citation
Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. On the impact of Language Adaptation for Large Language Models: A
case study for the Italian language using only open resources. Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023).
|
timm/efficientvit_l1.r224_in1k | timm | 2023-11-21T21:46:43Z | 559 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-11-21T21:40:19Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_l1.r224_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 52.7
- GMACs: 5.3
- Activations (M): 15.8
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction: https://arxiv.org/abs/2205.14756
- **Original:** https://github.com/mit-han-lab/efficientvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_l1.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_l1.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_l1.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{cai2022efficientvit,
title={EfficientViT: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
|
WizardLMTeam/WizardCoder-33B-V1.1 | WizardLMTeam | 2024-01-05T08:04:11Z | 559 | 124 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-04T09:15:02Z | ---
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: WizardCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.799
verified: false
---
## WizardCoder: Empowering Code Large Language Models with Evol-Instruct
<p style="font-size:28px;" align="center">
🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p>
<p align="center">
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p>
<p align="center">
📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
[2024/01/04] 🔥 We released **WizardCoder-33B-V1.1** trained from deepseek-coder-33b-base, the **SOTA OSS Code LLM** on [EvalPlus Leaderboard](https://evalplus.github.io/leaderboard.html), achieves **79.9 pass@1** on HumanEval, **73.2 pass@1** on HumanEval-Plus, **78.9 pass@1** on MBPP, and **66.9 pass@1** on MBPP-Plus.
[2024/01/04] 🔥 **WizardCoder-33B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, and **DeepSeek-Coder-33B-instruct** on HumanEval and HumanEval-Plus pass@1.
[2024/01/04] 🔥 **WizardCoder-33B-V1.1** is comparable with **ChatGPT 3.5**, and surpasses **Gemini Pro** on MBPP and MBPP-Plus pass@1.
| Model | Checkpoint | Paper | HumanEval | HumanEval+ | MBPP | MBPP+ | License |
| ----- |------| ---- |------|-------| ----- | ----- |----- |
| GPT-4-Turbo (Nov 2023) | - | - | 85.4 | 81.7 | 83.0 | 70.7 |-|
| GPT-4 (May 2023) | - | - | 88.4 | 76.8 | - | - |-|
| GPT-3.5-Turbo (Nov 2023) | - | - | 72.6 | 65.9 | 81.7 | 69.4 |-|
| Gemini Pro | - | - | 63.4 | 55.5 | 72.9 | 57.9 |-|
| DeepSeek-Coder-33B-instruct | - | - | 78.7 | 72.6 | 78.7 | 66.7 |-|
| **WizardCoder-33B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-33B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 79.9 | 73.2 | 78.9 | 66.9 | <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1/resolve/main/LICENSE" target="_blank">MSFTResearch</a> |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 64.6 | 73.2 | 59.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 52.4 | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
## How to Make the Training Data?
Apply our [Code Evol-Instruct](https://wizardlm.github.io/WizardCoder/) on [Code-Alpaca data](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k).
## ❗ Data Contamination Check:
Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on the HumanEval and MBPP test sets.
🔥
❗<b>Note on system prompt usage:</b>
Please use **exactly the same system prompts** as we do, and note that we do not guarantee the accuracy of the **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
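Filled in with a concrete instruction, the template is plain string formatting (the example instruction is ours, not from the source):

```python
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)
print(PROMPT.format(instruction="Write a Python function that reverses a string."))
```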
## How to Reproduce the Performance of WizardCoder-33B-V1.1
We provide all code [here](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder/src).
We also provide all generated [results](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/humaneval_mbpp_wizardcoder33b_v1.1_results.zip).
```
transformers==4.36.2
vllm==0.2.5
```
(1) HumanEval and HumanEval-Plus
- Step 1
Code Generation (w/o accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1
output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
start_index=$((i * 21))
end_index=$(((i + 1) * 21))
gpu=$((i))
echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
((index++))
(
CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
--start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --greedy_decode
) &
if (($index % $gpu_num == 0)); then wait; fi
done
```
Code Generation (w/ vllm accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1
output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
CUDA_VISIBLE_DEVICES=0,1,2,3 python humaneval_gen_vllm.py --model ${model} \
--start_index 0 --end_index 164 --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --num_gpus 4 --overwrite
```
- Step 2: Get the score
Install [Eval-Plus](https://github.com/evalplus/evalplus) benchmark.
```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```
Get HumanEval and HumanEval-Plus scores.
```bash
output_path=preds/T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode
echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
evalplus.evaluate --dataset humaneval --samples ${output_path}.jsonl
```
(2) MBPP and MBPP-Plus
The preprocessed questions are provided in [mbppplus.json](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/mbppplus.json).
- Step 1
Code Generation (w/o accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1
output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
# 399 problems, 50 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
start_index=$((i * 50))
end_index=$(((i + 1) * 50))
gpu=$((i))
echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
((index++))
(
CUDA_VISIBLE_DEVICES=$gpu python mbppplus_gen.py --model ${model} \
--start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --greedy_decode
) &
if (($index % $gpu_num == 0)); then wait; fi
done
```
Code Generation (w/ vllm accelerate)
```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1
output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm
mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model
CUDA_VISIBLE_DEVICES=0,1,2,3 python mbppplus_gen_vllm.py --model ${model} \
--start_index 0 --end_index 399 --temperature ${temp} \
--num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --num_gpus 4
```
- Step 2: Get the score
Install [Eval-Plus](https://github.com/evalplus/evalplus) benchmark.
```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```
Get MBPP and MBPP-Plus scores.
```bash
output_path=preds/MBPP_T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode
echo 'Output path: '$output_path
python mbppplus_process_preds.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt
evalplus.evaluate --dataset mbpp --samples ${output_path}.jsonl
```
## Citation
Please cite the repo if you use the data, method or code in this repo.
```
@article{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Luo, Ziyang and Xu, Can and Zhao, Pu and Sun, Qingfeng and Geng, Xiubo and Hu, Wenxiang and Tao, Chongyang and Ma, Jing and Lin, Qingwei and Jiang, Daxin},
journal={arXiv preprint arXiv:2306.08568},
year={2023}
}
``` |
ven1228/5Cm1Q71EmHNs4kp9ekTpazYh7n2f389NBMMNGqeUNDCwoxzF_vgg | ven1228 | 2024-03-11T12:47:10Z | 559 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-05T05:39:13Z | Entry not found |
ahmed807762/gemma-2b-vetdataset-finetuned | ahmed807762 | 2024-03-06T20:48:34Z | 559 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-06T20:45:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sezenkarakus/image-GIT-description-model-v3 | sezenkarakus | 2024-04-04T11:41:43Z | 559 | 0 | transformers | [
"transformers",
"safetensors",
"git",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-04T11:41:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
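While the authors have not yet provided a snippet, GIT-style models are usually driven through a processor/model pair in 🤗 Transformers. The sketch below follows the standard GIT captioning API; the sample image URL and generation length are illustrative assumptions, not documented usage for this checkpoint:

```python
from PIL import Image
import requests
from transformers import AutoProcessor, AutoModelForCausalLM

# Hypothetical usage, following the standard GIT (GenerativeImage2Text) API.
processor = AutoProcessor.from_pretrained("sezenkarakus/image-GIT-description-model-v3")
model = AutoModelForCausalLM.from_pretrained("sezenkarakus/image-GIT-description-model-v3")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image works
image = Image.open(requests.get(url, stream=True).raw)

# GIT conditions text generation on the image's pixel values.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```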
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SOVL_Llama3_8B-GGUF | mradermacher | 2024-05-05T15:10:46Z | 559 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ResplendentAI/SOVL_Llama3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-25T03:31:02Z | ---
base_model: ResplendentAI/SOVL_Llama3_8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ResplendentAI/SOVL_Llama3_8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SOVL_Llama3_8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
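As a concrete starting point, one of the quants listed below can be fetched and run locally. This is a minimal sketch assuming the `llama-cpp-python` bindings are installed; the chosen file and generation settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hedged sketch: download one quant from this repo and run it locally.
path = hf_hub_download(
    repo_id="mradermacher/SOVL_Llama3_8B-GGUF",
    filename="SOVL_Llama3_8B.Q4_K_M.gguf",  # any file from the table below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Tell me a short story about a fox.", max_tokens=128)["choices"][0]["text"])
```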
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SOVL_Llama3_8B-GGUF/resolve/main/SOVL_Llama3_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf | RichardErkhov | 2024-05-01T07:13:01Z | 559 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-01T05:10:15Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-anthropic - GGUF
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/mistral-7b-anthropic/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7b-anthropic.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-7b-anthropic.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-7b-anthropic.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-7b-anthropic.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7b-anthropic.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7b-anthropic.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7b-anthropic.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7b-anthropic.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7b-anthropic.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral-7b-anthropic.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7b-anthropic.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7b-anthropic.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7b-anthropic.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7b-anthropic.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7b-anthropic.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-7b-anthropic.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-7b-anthropic.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-7b-anthropic.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7b-anthropic.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7b-anthropic.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7b-anthropic.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-anthropic-gguf/blob/main/mistral-7b-anthropic.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: apache-2.0
base_model: HuggingFaceH4/mistral-7b-cai
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized_fixed
- HuggingFaceH4/cai-conversation-harmless
model-index:
- name: mistral-7b-dpo-v21.0cai.0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral 7B Constitutional AI
This model is a DPO-aligned version of Mistral 7B on the HuggingFaceH4/ultrafeedback_binarized_fixed and the HuggingFaceH4/cai-conversation-harmless datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6327
- Rewards/chosen: -9.8716
- Rewards/rejected: -14.5465
- Rewards/accuracies: 0.6725
- Rewards/margins: 4.6749
- Logps/rejected: -329.8578
- Logps/chosen: -294.6768
- Logits/rejected: -2.1023
- Logits/chosen: -2.1648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
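For orientation, the settings above correspond roughly to the following 🤗 `TrainingArguments`. This is a hedged sketch, not the actual alignment-handbook training script; the DPO-specific options and the multi-GPU launch (8 devices) are omitted:

```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; the Adam betas/epsilon
# shown above are the TrainingArguments defaults, so they need no override.
args = TrainingArguments(
    output_dir="mistral-7b-dpo",        # hypothetical output path
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```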
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6817 | 0.02 | 100 | 0.6873 | 0.0149 | 0.0002 | 0.5150 | 0.0147 | -184.3912 | -195.8124 | -3.1605 | -3.1560 |
| 0.6767 | 0.05 | 200 | 0.6614 | 0.0825 | 0.0169 | 0.5575 | 0.0656 | -184.2246 | -195.1362 | -3.1654 | -3.1605 |
| 0.6328 | 0.07 | 300 | 0.6246 | -0.0374 | -0.2112 | 0.5875 | 0.1738 | -186.5047 | -196.3349 | -3.1579 | -3.1529 |
| 0.5919 | 0.1 | 400 | 0.5978 | 0.2812 | -0.0666 | 0.6125 | 0.3478 | -185.0590 | -193.1489 | -3.1292 | -3.1243 |
| 0.5545 | 0.12 | 500 | 0.5800 | 0.1742 | -0.2810 | 0.6275 | 0.4552 | -187.2035 | -194.2191 | -3.0819 | -3.0788 |
| 0.5926 | 0.14 | 600 | 0.5599 | 0.2410 | -0.3076 | 0.6425 | 0.5487 | -187.4693 | -193.5507 | -3.0601 | -3.0597 |
| 0.5326 | 0.17 | 700 | 0.5385 | -0.2501 | -0.9698 | 0.6400 | 0.7197 | -194.0914 | -198.4624 | -2.9076 | -2.9090 |
| 0.5126 | 0.19 | 800 | 0.5238 | -0.3616 | -1.1783 | 0.6525 | 0.8167 | -196.1764 | -199.5769 | -2.9965 | -2.9963 |
| 0.5283 | 0.22 | 900 | 0.5289 | -0.4142 | -1.2542 | 0.6775 | 0.8400 | -196.9348 | -200.1031 | -3.0133 | -3.0134 |
| 0.5303 | 0.24 | 1000 | 0.5214 | -0.5949 | -1.5888 | 0.6600 | 0.9939 | -200.2815 | -201.9101 | -2.9663 | -2.9669 |
| 0.5969 | 0.26 | 1100 | 0.5235 | -0.5924 | -1.5222 | 0.6600 | 0.9298 | -199.6154 | -201.8848 | -2.9402 | -2.9468 |
| 0.581 | 0.29 | 1200 | 0.5887 | -0.7548 | -1.7075 | 0.6400 | 0.9527 | -201.4678 | -203.5091 | -2.7065 | -2.7227 |
| 0.817 | 0.31 | 1300 | 0.6620 | -1.5060 | -2.4221 | 0.6500 | 0.9160 | -208.6137 | -211.0213 | -2.7717 | -2.7800 |
| 0.6039 | 0.34 | 1400 | 0.5321 | -1.6820 | -2.8439 | 0.6425 | 1.1619 | -212.8325 | -212.7814 | -2.6828 | -2.6917 |
| 0.6666 | 0.36 | 1500 | 0.5303 | -1.3875 | -2.6384 | 0.6475 | 1.2509 | -210.7773 | -209.8365 | -2.8557 | -2.8594 |
| 0.6907 | 0.39 | 1600 | 0.5409 | -2.0657 | -3.2214 | 0.6650 | 1.1556 | -216.6068 | -216.6184 | -2.8227 | -2.8288 |
| 0.5772 | 0.41 | 1700 | 0.5309 | -1.9849 | -3.2833 | 0.6875 | 1.2985 | -217.2264 | -215.8097 | -2.6498 | -2.6635 |
| 0.5601 | 0.43 | 1800 | 0.5281 | -1.7365 | -3.0643 | 0.6575 | 1.3278 | -215.0359 | -213.3255 | -2.8890 | -2.8918 |
| 0.576 | 0.46 | 1900 | 0.5266 | -1.4822 | -2.9294 | 0.6725 | 1.4472 | -213.6872 | -210.7831 | -2.7369 | -2.7427 |
| 1.2064 | 0.48 | 2000 | 0.5538 | -2.5493 | -3.7625 | 0.6675 | 1.2132 | -222.0182 | -221.4542 | -2.6773 | -2.6957 |
| 0.5751 | 0.51 | 2100 | 0.5465 | -1.9246 | -3.1480 | 0.6425 | 1.2234 | -215.8728 | -215.2067 | -2.6490 | -2.6657 |
| 0.4757 | 0.53 | 2200 | 0.5297 | -1.8443 | -3.1553 | 0.6325 | 1.3110 | -215.9462 | -214.4039 | -2.6882 | -2.7115 |
| 0.4771 | 0.55 | 2300 | 0.5386 | -2.3340 | -3.7443 | 0.6500 | 1.4103 | -221.8360 | -219.3013 | -2.6415 | -2.6623 |
| 0.481 | 0.58 | 2400 | 0.5355 | -1.6085 | -3.0800 | 0.6550 | 1.4715 | -215.1930 | -212.0460 | -2.6073 | -2.6293 |
| 0.523 | 0.6 | 2500 | 0.5131 | -2.6139 | -4.2353 | 0.6625 | 1.6214 | -226.7459 | -222.0998 | -2.6134 | -2.6394 |
| 0.6263 | 0.63 | 2600 | 0.5287 | -2.6614 | -4.0538 | 0.6450 | 1.3924 | -224.9310 | -222.5747 | -2.6189 | -2.6361 |
| 0.5973 | 0.65 | 2700 | 0.5132 | -2.7089 | -4.1248 | 0.625 | 1.4159 | -225.6406 | -223.0499 | -2.6167 | -2.6317 |
| 0.8209 | 0.67 | 2800 | 0.5165 | -2.7085 | -4.1871 | 0.625 | 1.4786 | -226.2637 | -223.0462 | -2.5605 | -2.5803 |
| 0.5625 | 0.7 | 2900 | 0.5117 | -3.4747 | -5.0369 | 0.6325 | 1.5622 | -234.7624 | -230.7079 | -2.5891 | -2.6163 |
| 0.5913 | 0.72 | 3000 | 0.5164 | -2.5844 | -4.3822 | 0.6675 | 1.7978 | -228.2149 | -221.8051 | -2.6421 | -2.6632 |
| 0.7441 | 0.75 | 3100 | 0.5175 | -2.4900 | -4.2883 | 0.6725 | 1.7983 | -227.2762 | -220.8608 | -2.6254 | -2.6465 |
| 0.6169 | 0.77 | 3200 | 0.5163 | -2.2489 | -3.8666 | 0.6600 | 1.6176 | -223.0589 | -218.4503 | -2.6517 | -2.6775 |
| 0.5347 | 0.79 | 3300 | 0.5222 | -2.6699 | -4.3844 | 0.6375 | 1.7145 | -228.2368 | -222.6600 | -2.6712 | -2.6909 |
| 0.5369 | 0.82 | 3400 | 0.5244 | -2.7710 | -4.6352 | 0.6600 | 1.8642 | -230.7449 | -223.6711 | -2.5304 | -2.5595 |
| 0.5613 | 0.84 | 3500 | 0.5431 | -3.7645 | -5.6773 | 0.6475 | 1.9128 | -241.1664 | -233.6063 | -2.5348 | -2.5604 |
| 0.6395 | 0.87 | 3600 | 0.5332 | -3.8666 | -5.6894 | 0.6525 | 1.8227 | -241.2867 | -234.6274 | -2.5479 | -2.5778 |
| 0.6552 | 0.89 | 3700 | 0.5149 | -2.9168 | -4.7306 | 0.6525 | 1.8138 | -231.6990 | -225.1294 | -2.4580 | -2.4901 |
| 0.6381 | 0.91 | 3800 | 0.5081 | -2.6182 | -4.3003 | 0.6625 | 1.6821 | -227.3964 | -222.1432 | -2.4730 | -2.4991 |
| 0.5355 | 0.94 | 3900 | 0.5100 | -2.5302 | -4.2476 | 0.6475 | 1.7173 | -226.8689 | -221.2634 | -2.5875 | -2.6065 |
| 0.5488 | 0.96 | 4000 | 0.5164 | -3.1540 | -4.8339 | 0.6550 | 1.6798 | -232.7318 | -227.5013 | -2.7017 | -2.7215 |
| 0.6802 | 0.99 | 4100 | 0.5134 | -2.6060 | -4.2916 | 0.6625 | 1.6856 | -227.3087 | -222.0207 | -2.6010 | -2.6250 |
| 0.0976 | 1.01 | 4200 | 0.5031 | -3.0885 | -5.0494 | 0.6625 | 1.9609 | -234.8874 | -226.8463 | -2.4721 | -2.5028 |
| 0.0839 | 1.03 | 4300 | 0.5027 | -3.3469 | -5.4366 | 0.6625 | 2.0897 | -238.7592 | -229.4302 | -2.3886 | -2.4238 |
| 0.0788 | 1.06 | 4400 | 0.5398 | -4.4307 | -6.8568 | 0.6775 | 2.4261 | -252.9614 | -240.2679 | -2.1805 | -2.2275 |
| 0.0701 | 1.08 | 4500 | 0.5432 | -4.3739 | -7.0979 | 0.6975 | 2.7240 | -255.3717 | -239.7001 | -2.1935 | -2.2437 |
| 0.0959 | 1.11 | 4600 | 0.5362 | -3.9784 | -6.3235 | 0.6900 | 2.3451 | -247.6284 | -235.7450 | -2.2860 | -2.3272 |
| 0.1177 | 1.13 | 4700 | 0.5411 | -4.1933 | -6.8436 | 0.6800 | 2.6504 | -252.8295 | -237.8937 | -2.3259 | -2.3682 |
| 0.1651 | 1.16 | 4800 | 0.5737 | -4.8158 | -6.7229 | 0.6700 | 1.9071 | -251.6221 | -244.1190 | -2.2753 | -2.3139 |
| 0.1298 | 1.18 | 4900 | 0.5528 | -4.6526 | -6.8433 | 0.6825 | 2.1907 | -252.8262 | -242.4874 | -2.4856 | -2.5188 |
| 0.1143 | 1.2 | 5000 | 0.5512 | -4.6212 | -7.0807 | 0.6800 | 2.4595 | -255.2000 | -242.1734 | -2.5190 | -2.5542 |
| 0.1145 | 1.23 | 5100 | 0.5496 | -4.0598 | -6.6147 | 0.6775 | 2.5548 | -250.5396 | -236.5594 | -2.5737 | -2.6008 |
| 0.2324 | 1.25 | 5200 | 0.5524 | -4.9650 | -7.6613 | 0.6725 | 2.6962 | -261.0058 | -245.6115 | -2.4382 | -2.4737 |
| 0.0867 | 1.28 | 5300 | 0.5449 | -4.9568 | -7.6771 | 0.6625 | 2.7203 | -261.1645 | -245.5292 | -2.4367 | -2.4702 |
| 0.0503 | 1.3 | 5400 | 0.5351 | -4.5684 | -7.1860 | 0.6625 | 2.6176 | -256.2527 | -241.6449 | -2.4235 | -2.4557 |
| 0.0977 | 1.32 | 5500 | 0.5431 | -4.5599 | -7.1317 | 0.6550 | 2.5718 | -255.7096 | -241.5597 | -2.5311 | -2.5614 |
| 0.1564 | 1.35 | 5600 | 0.5512 | -5.1430 | -8.0510 | 0.6750 | 2.9080 | -264.9027 | -247.3911 | -2.3498 | -2.3976 |
| 0.0967 | 1.37 | 5700 | 0.5520 | -4.5072 | -7.4506 | 0.6750 | 2.9433 | -258.8989 | -241.0335 | -2.2110 | -2.2631 |
| 0.2046 | 1.4 | 5800 | 0.5588 | -5.5328 | -8.5314 | 0.6800 | 2.9986 | -269.7068 | -251.2888 | -2.2155 | -2.2677 |
| 0.0985 | 1.42 | 5900 | 0.5429 | -5.1915 | -7.9421 | 0.6675 | 2.7505 | -263.8138 | -247.8765 | -2.2606 | -2.3077 |
| 0.1398 | 1.44 | 6000 | 0.5350 | -4.9761 | -7.9378 | 0.6800 | 2.9616 | -263.7706 | -245.7224 | -2.2291 | -2.2809 |
| 0.099 | 1.47 | 6100 | 0.5440 | -4.6202 | -7.4996 | 0.6650 | 2.8794 | -259.3892 | -242.1633 | -2.3362 | -2.3859 |
| 0.1279 | 1.49 | 6200 | 0.5389 | -4.9461 | -7.7908 | 0.6625 | 2.8448 | -262.3015 | -245.4217 | -2.2276 | -2.2734 |
| 0.0778 | 1.52 | 6300 | 0.5451 | -4.9550 | -7.8964 | 0.6625 | 2.9414 | -263.3570 | -245.5110 | -2.4781 | -2.5193 |
| 0.0911 | 1.54 | 6400 | 0.5412 | -5.4552 | -8.3139 | 0.6675 | 2.8588 | -267.5324 | -250.5128 | -2.3604 | -2.4048 |
| 0.2149 | 1.56 | 6500 | 0.5241 | -4.4512 | -7.3194 | 0.6725 | 2.8682 | -257.5873 | -240.4732 | -2.4011 | -2.4461 |
| 0.1739 | 1.59 | 6600 | 0.5329 | -5.0143 | -7.7507 | 0.6825 | 2.7364 | -261.8999 | -246.1036 | -2.4143 | -2.4577 |
| 0.0842 | 1.61 | 6700 | 0.5395 | -5.1195 | -8.0856 | 0.6800 | 2.9661 | -265.2489 | -247.1560 | -2.3877 | -2.4376 |
| 0.105 | 1.64 | 6800 | 0.5423 | -4.9379 | -7.7557 | 0.6775 | 2.8178 | -261.9503 | -245.3403 | -2.3798 | -2.4323 |
| 0.086 | 1.66 | 6900 | 0.5351 | -4.3598 | -7.1156 | 0.6775 | 2.7559 | -255.5494 | -239.5588 | -2.3870 | -2.4383 |
| 0.0622 | 1.68 | 7000 | 0.5394 | -4.6830 | -7.6578 | 0.6825 | 2.9747 | -260.9710 | -242.7915 | -2.4276 | -2.4779 |
| 0.0973 | 1.71 | 7100 | 0.5319 | -4.7475 | -7.6567 | 0.6750 | 2.9091 | -260.9596 | -243.4364 | -2.3010 | -2.3564 |
| 0.1052 | 1.73 | 7200 | 0.5284 | -4.5972 | -7.5385 | 0.6750 | 2.9413 | -259.7779 | -241.9329 | -2.3696 | -2.4201 |
| 0.0645 | 1.76 | 7300 | 0.5339 | -4.9822 | -8.0212 | 0.6775 | 3.0390 | -264.6048 | -245.7831 | -2.2857 | -2.3440 |
| 0.0923 | 1.78 | 7400 | 0.5385 | -4.6369 | -7.6632 | 0.6650 | 3.0263 | -261.0246 | -242.3295 | -2.2563 | -2.3150 |
| 0.0842 | 1.81 | 7500 | 0.5394 | -4.8705 | -7.6765 | 0.6600 | 2.8060 | -261.1580 | -244.6661 | -2.2808 | -2.3287 |
| 0.1178 | 1.83 | 7600 | 0.5253 | -4.7985 | -7.5635 | 0.6675 | 2.7650 | -260.0276 | -243.9457 | -2.4022 | -2.4463 |
| 0.1255 | 1.85 | 7700 | 0.5355 | -4.7007 | -7.4363 | 0.6675 | 2.7355 | -258.7556 | -242.9684 | -2.5073 | -2.5501 |
| 0.1541 | 1.88 | 7800 | 0.5440 | -4.9294 | -7.6465 | 0.6500 | 2.7172 | -260.8584 | -245.2547 | -2.3551 | -2.4036 |
| 0.0893 | 1.9 | 7900 | 0.5397 | -5.2135 | -8.3241 | 0.6575 | 3.1106 | -267.6339 | -248.0959 | -2.3214 | -2.3784 |
| 0.1203 | 1.93 | 8000 | 0.5296 | -4.8644 | -7.8598 | 0.6550 | 2.9954 | -262.9913 | -244.6054 | -2.4509 | -2.4969 |
| 0.1018 | 1.95 | 8100 | 0.5381 | -5.3471 | -8.4918 | 0.6625 | 3.1447 | -269.3113 | -249.4323 | -2.4193 | -2.4671 |
| 0.0767 | 1.97 | 8200 | 0.5386 | -5.2151 | -8.3734 | 0.6675 | 3.1582 | -268.1267 | -248.1124 | -2.4873 | -2.5329 |
| 0.0801 | 2.0 | 8300 | 0.5429 | -5.8103 | -9.0391 | 0.6575 | 3.2288 | -274.7842 | -254.0639 | -2.4348 | -2.4867 |
| 0.034 | 2.02 | 8400 | 0.5566 | -5.7907 | -9.2424 | 0.6625 | 3.4518 | -276.8175 | -253.8677 | -2.3679 | -2.4272 |
| 0.0246 | 2.05 | 8500 | 0.5758 | -5.6317 | -9.1533 | 0.6625 | 3.5216 | -275.9264 | -252.2783 | -2.3335 | -2.3958 |
| 0.0187 | 2.07 | 8600 | 0.5770 | -5.5795 | -9.2568 | 0.6725 | 3.6773 | -276.9613 | -251.7559 | -2.3614 | -2.4166 |
| 0.0606 | 2.09 | 8700 | 0.6115 | -7.1190 | -11.2853 | 0.6750 | 4.1663 | -297.2460 | -267.1512 | -2.2737 | -2.3365 |
| 0.0402 | 2.12 | 8800 | 0.6164 | -7.0531 | -11.1316 | 0.6600 | 4.0785 | -295.7089 | -266.4919 | -2.2005 | -2.2654 |
| 0.0263 | 2.14 | 8900 | 0.6209 | -8.1609 | -12.3710 | 0.6650 | 4.2102 | -308.1034 | -277.5696 | -2.0958 | -2.1661 |
| 0.0242 | 2.17 | 9000 | 0.6042 | -6.7201 | -10.7618 | 0.6725 | 4.0416 | -292.0106 | -263.1622 | -2.1651 | -2.2304 |
| 0.0383 | 2.19 | 9100 | 0.6080 | -7.7898 | -11.9356 | 0.6750 | 4.1458 | -303.7489 | -273.8587 | -2.1006 | -2.1662 |
| 0.0371 | 2.21 | 9200 | 0.6149 | -7.5635 | -11.7050 | 0.6675 | 4.1415 | -301.4433 | -271.5960 | -2.1556 | -2.2155 |
| 0.0279 | 2.24 | 9300 | 0.6155 | -8.1686 | -12.4447 | 0.6775 | 4.2760 | -308.8397 | -277.6473 | -2.1778 | -2.2399 |
| 0.021 | 2.26 | 9400 | 0.6137 | -7.8294 | -12.0416 | 0.6700 | 4.2122 | -304.8092 | -274.2550 | -2.2403 | -2.2958 |
| 0.0374 | 2.29 | 9500 | 0.6238 | -7.9227 | -12.2842 | 0.6750 | 4.3614 | -307.2347 | -275.1884 | -2.2926 | -2.3496 |
| 0.0412 | 2.31 | 9600 | 0.6126 | -7.7094 | -11.9775 | 0.6700 | 4.2681 | -304.1685 | -273.0553 | -2.2377 | -2.2961 |
| 0.0413 | 2.33 | 9700 | 0.6130 | -7.6030 | -11.8721 | 0.6675 | 4.2691 | -303.1140 | -271.9912 | -2.2505 | -2.3100 |
| 0.0361 | 2.36 | 9800 | 0.6248 | -8.1273 | -12.6010 | 0.6750 | 4.4737 | -310.4034 | -277.2341 | -2.2249 | -2.2866 |
| 0.0289 | 2.38 | 9900 | 0.6192 | -7.9924 | -12.3825 | 0.6675 | 4.3901 | -308.2185 | -275.8853 | -2.2473 | -2.3067 |
| 0.038 | 2.41 | 10000 | 0.6250 | -8.4114 | -12.8701 | 0.6675 | 4.4586 | -313.0937 | -280.0753 | -2.2312 | -2.2938 |
| 0.0334 | 2.43 | 10100 | 0.6261 | -9.1807 | -13.7488 | 0.6825 | 4.5681 | -321.8813 | -287.7679 | -2.2303 | -2.2947 |
| 0.0359 | 2.45 | 10200 | 0.6374 | -9.8214 | -14.2774 | 0.6650 | 4.4560 | -327.1667 | -294.1750 | -2.1817 | -2.2452 |
| 0.0266 | 2.48 | 10300 | 0.6298 | -8.3278 | -12.5691 | 0.6650 | 4.2413 | -310.0836 | -279.2391 | -2.2947 | -2.3521 |
| 0.0423 | 2.5 | 10400 | 0.6267 | -8.7527 | -13.2552 | 0.6675 | 4.5025 | -316.9453 | -283.4879 | -2.3034 | -2.3620 |
| 0.0329 | 2.53 | 10500 | 0.6386 | -8.9354 | -13.5549 | 0.6700 | 4.6195 | -319.9424 | -285.3152 | -2.2819 | -2.3423 |
| 0.039 | 2.55 | 10600 | 0.6330 | -8.3549 | -12.8863 | 0.6775 | 4.5314 | -313.2566 | -279.5103 | -2.2924 | -2.3528 |
| 0.0278 | 2.58 | 10700 | 0.6336 | -8.6754 | -13.1733 | 0.6675 | 4.4979 | -316.1258 | -282.7150 | -2.2319 | -2.2929 |
| 0.0606 | 2.6 | 10800 | 0.6299 | -8.7158 | -13.0817 | 0.6700 | 4.3658 | -315.2101 | -283.1195 | -2.2116 | -2.2731 |
| 0.0293 | 2.62 | 10900 | 0.6259 | -8.9092 | -13.2926 | 0.6725 | 4.3834 | -317.3194 | -285.0532 | -2.1572 | -2.2209 |
| 0.0196 | 2.65 | 11000 | 0.6219 | -9.1783 | -13.5617 | 0.6700 | 4.3835 | -320.0104 | -287.7436 | -2.1533 | -2.2163 |
| 0.0405 | 2.67 | 11100 | 0.6209 | -8.9912 | -13.3040 | 0.6700 | 4.3128 | -317.4330 | -285.8734 | -2.1378 | -2.2017 |
| 0.0278 | 2.7 | 11200 | 0.6300 | -9.8318 | -14.2684 | 0.6700 | 4.4366 | -327.0771 | -294.2787 | -2.1220 | -2.1862 |
| 0.0307 | 2.72 | 11300 | 0.6356 | -9.7027 | -14.1764 | 0.6700 | 4.4737 | -326.1576 | -292.9880 | -2.1316 | -2.1945 |
| 0.0242 | 2.74 | 11400 | 0.6327 | -9.8085 | -14.2574 | 0.6625 | 4.4489 | -326.9674 | -294.0465 | -2.1072 | -2.1680 |
| 0.0242 | 2.77 | 11500 | 0.6308 | -9.3697 | -13.8420 | 0.6650 | 4.4723 | -322.8135 | -289.6585 | -2.1273 | -2.1882 |
| 0.0337 | 2.79 | 11600 | 0.6350 | -9.2810 | -13.7917 | 0.6700 | 4.5107 | -322.3100 | -288.7711 | -2.1600 | -2.2215 |
| 0.0302 | 2.82 | 11700 | 0.6450 | -10.2754 | -14.9521 | 0.6675 | 4.6767 | -333.9139 | -298.7146 | -2.1339 | -2.1965 |
| 0.0354 | 2.84 | 11800 | 0.6451 | -10.3736 | -15.0743 | 0.6725 | 4.7008 | -335.1366 | -299.6965 | -2.1047 | -2.1674 |
| 0.0153 | 2.86 | 11900 | 0.6420 | -10.2126 | -14.9126 | 0.6700 | 4.7000 | -333.5196 | -298.0872 | -2.1102 | -2.1728 |
| 0.0388 | 2.89 | 12000 | 0.6407 | -10.2075 | -14.9081 | 0.6725 | 4.7006 | -333.4741 | -298.0356 | -2.1059 | -2.1687 |
| 0.0253 | 2.91 | 12100 | 0.6353 | -10.0842 | -14.7598 | 0.6650 | 4.6756 | -331.9908 | -296.8029 | -2.0968 | -2.1594 |
| 0.0317 | 2.94 | 12200 | 0.6352 | -9.9956 | -14.6819 | 0.6750 | 4.6863 | -331.2123 | -295.9169 | -2.1042 | -2.1665 |
| 0.0431 | 2.96 | 12300 | 0.6337 | -9.8807 | -14.5540 | 0.6675 | 4.6733 | -329.9332 | -294.7676 | -2.1034 | -2.1660 |
| 0.0233 | 2.98 | 12400 | 0.6326 | -9.8796 | -14.5449 | 0.6675 | 4.6653 | -329.8422 | -294.7567 | -2.1032 | -2.1657 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
princeton-nlp/Llama-3-Base-8B-SFT-DPO | princeton-nlp | 2024-06-17T11:45:40Z | 559 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2405.14734",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-17T21:28:41Z | This is a model released from the preprint: *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)* Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
chihun-jang/mainCut-2024-05-27 | chihun-jang | 2024-05-31T05:40:10Z | 559 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:beomi/kcbert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-27T02:21:49Z | ---
license: apache-2.0
base_model: beomi/kcbert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mainCut-2024-05-27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mainCut-2024-05-27
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0048
- Accuracy: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
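Until the intended task is documented, a minimal hedged sketch for querying the classifier is below; the Korean input is a made-up example ("Which class does this sentence belong to?"), and the label set is unknown:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="chihun-jang/mainCut-2024-05-27")
print(clf("이 문장은 어떤 클래스일까요?"))  # hypothetical Korean input
```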
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8028 | 1.0 | 8000 | 1.1263 | 0.7933 |
| 0.8913 | 2.0 | 16000 | 1.0275 | 0.8083 |
| 0.6098 | 3.0 | 24000 | 1.0048 | 0.8096 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
seoyoung-3060/codeparrot-ds | seoyoung-3060 | 2024-05-29T04:15:22Z | 559 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-28T05:20:34Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
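Pending more documentation, a minimal hedged sketch for sampling from the model is below; the prompt and sampling length are illustrative, and (given the CodeParrot lineage) the model is presumed to complete Python code:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="seoyoung-3060/codeparrot-ds")
print(generator("def load_csv(path):", max_new_tokens=40)[0]["generated_text"])
```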
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
KETI-AIR/ke-t5-small | KETI-AIR | 2023-09-18T01:23:33Z | 558 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: [en, ko]
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 small
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5), [Paper](https://aclanthology.org/2021.findings-emnlp.33/), and [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small")
```
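Note that `AutoModel` returns the bare encoder-decoder without a generation head. For text-to-text use, one would typically load the conditional-generation class instead; this is a hedged sketch, and since the checkpoint is pre-trained with span corruption, raw outputs are not meaningful until the model is fine-tuned on a downstream task:

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

model = T5ForConditionalGeneration.from_pretrained("KETI-AIR/ke-t5-small")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small")

# "Father enters the room." -- the widget example above
inputs = tokenizer("아버지가 방에 들어가신다.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```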
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
tae898/emoberta-base | tae898 | 2022-03-16T11:01:29Z | 558 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"emoberta",
"en",
"dataset:MELD",
"dataset:IEMOCAP",
"arxiv:2108.12009",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-14T20:03:08Z | ---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then
run the command below in this directory. I recommend running it in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
The numbers above are the mean values of five random-seed runs.
If you want to see more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are not speaker-aware and do not take previous utterances into account, meaning that they only classify one utterance at a time without speaker information (e.g., "I love you").
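Besides the Flask app described below, the checkpoints can be queried directly with a 🤗 pipeline. A minimal sketch (on older Transformers versions, `return_all_scores=True` replaces `top_k=None`):

```python
from transformers import pipeline

# top_k=None returns scores for all seven emotion labels.
classifier = pipeline("text-classification", model="tae898/emoberta-base", top_k=None)
print(classifier("Emotion recognition is so cool!"))
```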
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to check the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
|
EleutherAI/pythia-1b-v0 | EleutherAI | 2023-07-10T01:35:25Z | 558 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-10-16T18:27:56Z | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
yonatano/contam-1.4b | yonatano | 2023-11-01T19:08:22Z | 558 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-31T22:28:33Z | ---
license: apache-2.0
---
|
ai4stem-uga/G-SciEdBERT | ai4stem-uga | 2024-02-13T09:15:51Z | 558 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"Science",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-11T04:52:49Z | ---
license: mit
language:
- de
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- Science
---
## G-SciEdBERT: A Contextualized LLM for Science Assessment Tasks in German
This repository provides G-SciEdBERT, a contextualized German Science Education BERT:
an innovative large language model tailored for scoring German-written responses to science tasks.
Starting from G-BERT, we pre-trained G-SciEdBERT on a corpus of 50K German-written science responses (5M tokens) from the Programme for International Student Assessment (PISA) 2015.
We fine-tuned G-SciEdBERT on 59 assessment items, examined the scoring accuracy, and compared its performance with G-BERT.
Our findings reveal a substantial improvement in scoring accuracy with G-SciEdBERT, demonstrating a 10% increase in quadratic weighted kappa compared to G-BERT
(mean accuracy difference = 0.096, SD = 0.024). These insights underline the significance of specialized language models like G-SciEdBERT,
which are trained to enhance the accuracy of automated scoring, offering a substantial contribution to the field of AI in education.
## Dataset
It is a pre-trained German science education BERT for written German science assessments of the PISA test.
PISA is an international test to monitor education trends led by the OECD (Organisation for Economic Co-operation and Development).
PISA items are developed to assess scientific literacy, highlighting real-world problem-solving skills and the needs of the future workforce.
This study analyzed data collected for 59 constructed-response science assessment items in German at the middle school level.
A total of 6,116 German students from 257 schools participated in PISA 2015.
Given the geographical diversity of participants, PISA data reflect the general German students' science literacy.
The PISA items selected require either short (around one sentence) or extended (up to five sentences) responses.
The minimum score for all items is 0, with the maximum being 3 or 4 for short responses and 4 or 5 for extended responses.
Student responses have 20 words on average. Our pre-training dataset contains more than 50,000 student-written German responses,
which means approximately 1,000 human-scored student responses per item for contextual learning through fine-tuning.
More than 10 human raters scored each response in the training dataset organized by OECD.
The responses were graded irrespective of the student's ethnicity, race, or gender to ensure fairness.
## Architecture
The model is pre-trained from [G-BERT](https://huggingface.co/dbmdz/bert-base-german-uncased?text=Ich+mag+dich.+Ich+liebe+%5BMASK%5D); the pre-training method is illustrated below:

## Evaluation Results
The table below compares the outcomes between G-BERT and G-SciEdBERT for five randomly picked PISA assessment items, along with the average accuracy (QWK)
reported for all datasets combined. It shows that G-SciEdBERT significantly outperformed G-BERT on automatic scoring of student written responses.
Based on the QWK values, the percentage differences in accuracy vary from 4.2% to 13.6%, with an average increase of 10.0% (from .7136 to .8137).
The improvement is especially noteworthy for item S268Q02, which saw the largest gain at 13.6% (from .757 to .893).
These findings demonstrate that G-SciEdBERT is more effective than G-BERT at comprehending and assessing complex science-related writings.
The results of our analysis strongly support the adoption of G-SciEdBERT for the automatic scoring of German-written science responses in large-scale
assessments such as PISA, given its superior accuracy over the general-purpose G-BERT model.
| Item | Training Samples | Testing Samples | Labels | G-BERT | G-SciEdBERT |
|---------|------------------|-----------------|--------------|--------|-------------|
| S131Q02 | 487 | 122 | 5 | 0.761 | **0.852** |
| S131Q04 | 478 | 120 | 5 | 0.683 | **0.725** |
| S268Q02 | 446 | 112 | 2 | 0.757 | **0.893** |
| S269Q01 | 508 | 127 | 2 | 0.837 | **0.953** |
| S269Q03 | 500 | 126 | 4 | 0.702 | **0.802** |
| Average | 665.95 | 166.49 | 2-5 (min-max) | 0.7136 | **0.8137** |
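Quadratic weighted kappa penalizes disagreements by the squared distance between score levels, so a prediction two points off costs more than one point off. It can be computed with scikit-learn; the labels below are invented for illustration:
```python
from sklearn.metrics import cohen_kappa_score

human_scores = [0, 1, 2, 2, 3, 4]  # illustrative human ratings
model_scores = [0, 1, 1, 2, 3, 3]  # illustrative model predictions

qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```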
## Usage
With Transformers >= 2.3, the model can be loaded like this:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4stem-uga/G-SciEdBERT")
model = AutoModel.from_pretrained("ai4stem-uga/G-SciEdBERT")
```
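To score responses for a given item, one would attach a sequence-classification head and fine-tune it per item, since the label count differs between items (2-5 in this study). A minimal sketch of that setup follows; note the head below is freshly initialized, so it must be fine-tuned before the scores mean anything, and the example response is invented:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ai4stem-uga/G-SciEdBERT")
# num_labels depends on the assessment item (2-5 in this study); this head
# is randomly initialized here and requires per-item fine-tuning before use.
model = AutoModelForSequenceClassification.from_pretrained(
    "ai4stem-uga/G-SciEdBERT", num_labels=5
)

response = "Die Pflanze braucht Licht, um Photosynthese zu betreiben."  # invented
inputs = tokenizer(response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted score:", logits.argmax(dim=-1).item())
```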
## Acknowledgments
This project is supported by the Alexander von Humboldt Foundation (PI Xiaoming Zhai, [email protected]).
## Citation
```bibtex
@article{Latif_2024_G-SciEdBERT,
  author  = {Latif, Ehsan and Lee, Gyeong-Geon and Neuman, Knut and Kastorff, Tamara and Zhai, Xiaoming},
  title   = {G-SciEdBERT: A Contextualized LLM for Science Assessment Tasks in German},
  journal = {arXiv preprint arXiv:2301.12031},
  year    = {2024},
  pages   = {1-9}
}
```
*This model is trained and shared by Ehsan Latif, Ph.D ([email protected]) |
amichalski2/tinyllama-email-model-full | amichalski2 | 2024-02-14T14:50:54Z | 558 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-14T14:05:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nroggendorff/cats | nroggendorff | 2024-04-13T20:08:47Z | 558 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"diffusion",
"unconditional-image-generation",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2024-04-03T16:59:56Z | ---
license: mit
metrics:
- mse
library_name: diffusers
tags:
- diffusion
pipeline_tag: unconditional-image-generation
---
## Cu~~rsed~~te kitty cats Model Card
DDPMCats is a latent noise-to-image diffusion model capable of generating images of cats. For more information about how Stable Diffusion functions, please have a look at 🤗's [Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
You can use this with the 🧨Diffusers library from [Hugging Face](https://huggingface.co).

### Diffusers
```py
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("nroggendorff/cats")
pipe = pipeline.to("cuda")
image = pipe().images[0]
image.save("cat.png")
```
### Model Details
- `train_batch_size`: 16
- `eval_batch_size`: 16
- `num_epochs`: 50
- `gradient_accumulation_steps`: 1
- `learning_rate`: 1e-4
- `lr_warmup_steps`: 500
- `mixed_precision`: "fp16"
- `eval_metric`: "mean_squared_error"
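These hyperparameter names resemble the training config from the 🤗 Diffusers unconditional image generation example; collected into that shape, they might look like this (structure assumed, values taken from the list above):
```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    train_batch_size: int = 16
    eval_batch_size: int = 16
    num_epochs: int = 50
    gradient_accumulation_steps: int = 1
    learning_rate: float = 1e-4
    lr_warmup_steps: int = 500
    mixed_precision: str = "fp16"  # evaluated with mean squared error

config = TrainingConfig()
```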
### Bias
- This model may exhibit biases due to its training data. It will not display images of abused or sick cats, as it prioritizes the well-being of animals.
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model was trained on a medium-to-large-scale dataset: [huggan/few-shot-cat](https://huggingface.co/datasets/huggan/few-shot-cat)
### Developed by
- Noa Linden Roggendorff
*This model card was written by Noa Roggendorff and is based on the [Stable Diffusion v1-5 Model Card](https://huggingface.co/runwayml/stable-diffusion-v1-5).* |
FairMind/Minerva-3B-Instruct-v1.0 | FairMind | 2024-05-10T12:50:26Z | 558 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"trl",
"sft",
"it",
"dataset:mchl-labs/stambecco_data_it",
"base_model:sapienzanlp/Minerva-3B-base-v1.0",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-09T11:06:26Z | ---
license: cc-by-nc-sa-4.0
language:
- it
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- mistral
- trl
- sft
base_model: sapienzanlp/Minerva-3B-base-v1.0
datasets:
- mchl-labs/stambecco_data_it
widget:
- text: "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta. \n### Istruzione:\nSuggerisci un'attività serale romantica\n\n### Input:\n\n### Risposta:"
example_title: Example 1
---
# Model Card for Minerva-3B-Instruct-v1.0
Minerva-3B-Instruct-v1.0 is an instruction-tuned version of the Minerva-3B-base-v1.0 model, specifically fine-tuned for understanding and following instructions in Italian.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Walid Iguider
- **Model type:** Instruction Tuned
- **License:** cc-by-nc-sa-4.0
- **Finetuned from model:** [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0), developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/)
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Model/metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Minerva-3B-Instruct-v1.0** | 0.5197 | 0.3157 | 0.2631 | 0.366 |
| Minerva-3B-base-v1.0 | 0.5187 | 0.3045 | 0.2612 | 0.361 |
### Sample Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
torch.random.manual_seed(0)
# Run a text-generation pipeline with the fine-tuned model
prompt = """Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce
ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta.
### Istruzione:
Suggerisci un'attività serale romantica
### Input:
### Risposta:"""
model_id = "FairMind/Minerva-3B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
output = pipe(prompt, **generation_args)
print(output[0]['generated_text'])
```
|
mradermacher/AtomPro-7B-GGUF | mradermacher | 2024-05-30T04:42:24Z | 558 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"GritLM/GritLM-7B",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"en",
"base_model:powermove72/AtomPro-7B",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-29T00:52:25Z | ---
base_model: powermove72/AtomPro-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- GritLM/GritLM-7B
- NousResearch/Hermes-2-Pro-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/AtomPro-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
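For repos that do ship multi-part files, joining them is typically a plain byte-level concatenation; for example (part naming is illustrative and varies by repo):
```bash
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```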
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AtomPro-7B-GGUF/resolve/main/AtomPro-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf | RichardErkhov | 2024-06-03T03:43:24Z | 558 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-02T17:38:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Iambe-RP-DARE-20b - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Iambe-RP-DARE-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Iambe-RP-DARE-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q2_K.gguf) | Q2_K | 6.91GB |
| [Iambe-RP-DARE-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Iambe-RP-DARE-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Iambe-RP-DARE-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Iambe-RP-DARE-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Iambe-RP-DARE-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q3_K.gguf) | Q3_K | 9.04GB |
| [Iambe-RP-DARE-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Iambe-RP-DARE-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Iambe-RP-DARE-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Iambe-RP-DARE-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Iambe-RP-DARE-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Iambe-RP-DARE-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Iambe-RP-DARE-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q4_K.gguf) | Q4_K | 11.22GB |
| [Iambe-RP-DARE-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Iambe-RP-DARE-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Iambe-RP-DARE-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Iambe-RP-DARE-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Iambe-RP-DARE-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q5_K.gguf) | Q5_K | 13.18GB |
| [Iambe-RP-DARE-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Iambe-RP-DARE-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Iambe-RP-DARE-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q6_K.gguf) | Q6_K | 15.28GB |
| [Iambe-RP-DARE-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf/blob/main/Iambe-RP-DARE-20b.Q8_0.gguf) | Q8_0 | 19.79GB |
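To fetch a single quant from this repo, something along these lines should work (illustrative; requires the `huggingface_hub` CLI):
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download RichardErkhov/athirdpath_-_Iambe-RP-DARE-20b-gguf \
  Iambe-RP-DARE-20b.Q4_K_M.gguf --local-dir .
```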
Original model description:
---
license: cc-by-nc-4.0
language:
- en
tags:
- not-for-all-audiences
---
<p align="center"><img src="https://i.ibb.co/PCY2Xym/Iambe-RP-sml-t.png"/><font size="6"> <b>Iambe-RP-DARE-20b</b> </font></p>
<p align="center"><font size="4"> <b>Trained with Alpaca prompt formatting, ChatML works well too</b> </font></p>
### Description
3 Iambe-RPs bounce off the walls and blend, like their namesake's mother, Echo...
<i>[One daughter](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b) was excellent at all the arts and sciences, but she danced free, loathe to read scripts or follow instructions.
[Another daughter](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b-ALT) had burned her books, embraced emotion, and thrown logic to the hounds.
[The last daughter](https://huggingface.co/athirdpath/Iambe-RP-cDPO-20b-v2) just sat, watching them - silently judging - for enforcing morality was her burden alone.
Then, where there was three, there was one.</i>
Iambe is intended to have the best realistically possible understanding of instructions, anatomy and scene state for a 20b merge, while remaining passionate and humanoid in "voice".
### Update Methodology
All three Iambe-RPs had incredible strength, but also glaring issues. I couldn't decide so I figured, hey, "por que no los tres?"
Hopefully, this enhances the lucidity even more and fixes the typographical errors.
<p align="center"><font size="5"> <b>Assistant Example @ q5_k_m</b> </font></p>
<p align="center"><img src="https://i.ibb.co/fY2pGtM/Screenshot-2023-12-10-030938.png"/>
<p align="center"><img src="https://i.ibb.co/1nLLXNG/Screenshot-2023-12-10-122952.png"/>
<p align="center"><img src="https://i.ibb.co/N3zd5Hr/Screenshot-2023-12-10-124038.png"/>
<p align="center"><font size="5"> <b>NSFW Writing Example @ q5_k_m</b> </font></p>
<b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b>
<i>
Once upon a time, in a dimly lit corner of the trendiest bar downtown, Alice found herself growing bored by another slow night serving drinks to patrons who barely acknowledged her presence. As an attractive 26-year-old cocktail waitress with voluptuous curves and mesmerizing eyes, she had become accustomed to the attention she received from men of all ages.
But tonight was different. Alice yearned for something more than just casual flirting or meaningless conversations that led nowhere. She craved physical connection, a passionate encounter with someone who would treat her body like royalty and make her feel truly desired.
As she glanced around the crowded bar, her gaze landed on a tall, handsome man sitting alone at the end of the counter. His piercing blue eyes met hers for just a brief moment before he quickly looked away, but it was enough to pique Alice's interest. She walked over to him and asked with a seductive smile, "What can I get you handsome? And don't say water."
He chuckled softly, his deep baritone sending shivers down her spine. “I think I’d like whatever this beautiful woman is having,” he replied confidently.
Alice couldn't resist the charm radiating from him and served him a strong Manhattan while pouring herself a similar drink as well. As they clinked their glasses together and took a sip, they began to talk more openly about themselves – his name was Jack, he was in town for business and was looking for a break from the monotony of hotel room dinners alone.
As Alice listened intently to Jack's tales of adventure and his charming sense of humor, she couldn’t help but feel drawn towards him. There was something about this man that sparked a fire inside her, one that had been smoldering for far too long. Without hesitation, she suggested they leave the bar together and explore what lay beyond its confines.
Jack grinned knowingly at Alice's boldness and agreed without question. They walked hand in hand through the bustling city streets until they reached her quaint little apartment – a place that was as warm and inviting as she was herself. The air between them crackled with electricity, both of them aware of what lay ahead if they took this next step together.
Inside Alice's cozy living room, lit only by the soft glow of candles, Jack began to undress her slowly, peeling away each piece of clothing until she stood before him in nothing but a lace bra and matching thong. His hands roamed over her supple curves, leaving behind trail of goosebumps on her skin as he kissed her neck and nibbled her earlobes.
Alice moaned softly, arching into his touch as she reached down to unbutton Jack's dress shirt, eager for more contact with his hard muscles beneath the fabric. She slid her hands underneath and massaged his firm chest, eliciting a groan of pleasure from him that spurred her on.
As their passion intensified, they fell onto Alice’s comfortable couch – Jack positioning himself over her as he captured her lips once more in a hungry kiss. He slid down to lick and nibble at the sensitive skin around her breasts before moving lower still until his tongue teased her aching clit through the delicate fabric of her thong.
Alice cried out, arching up into his mouth as he lavished attention on her most intimate parts with expertise that left her breathless and begging for more. She tugged at the elastic waistband of his pants, eager to finally feel him inside her.
In one swift move, Jack stood up, removed his remaining clothing, and positioned himself between Alice's spread legs. He entered her with a single thrust that made them both gasp – his thick cock filling her completely as he began to pound into her at an insistent pace.
The sounds of their passion filled the room: grunts of pleasure, moans of ecstasy, and the slapping sound of skin against skin as they moved together in perfect harmony. Alice wrapped her legs tightly around Jack's waist, urging him deeper into her body while her hands clutched at his muscular shoulders, leaving red love marks behind.
As their climaxes approached, both Alice and Jack knew this wouldn’t be the end of it – there was too much desire still simmering between them for just one night together to satisfy either of them. But for now, they lost themselves in each other's embrace, relishing every second of the intense passion that bound them together on a steamy night filled with lust and love.
And as dawn began to creep through Alice’s window, casting warm rays over their sweaty bodies entangled on her couch, they knew this was not just another one-night stand – it was the beginning of something truly special for them both.
</i>
<b>NOTE:</b> The prompt said "a guy she picked up at the bar <b>FOR</b> a one night stand" not "a one night stand <b>with</b> a guy she picked up at the bar" so technically she didn't go off prompt.
### Recipe
merge_method: dare_ties
- base_model: athirdpath/BigLlama-20b-v1.1
- model: athirdpath/Iambe-RP-cDPO-20b
weight: 0.77 / density: 0.75
- model: athirdpath/Iambe-RP-cDPO-20b-v2
weight: 0.10 / density: 0.20
- model: athirdpath/Iambe-RP-cDPO-20b-ALT
weight: 0.13 / density: 0.25
int8_mask: true
dtype: bfloat16
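For reference, a mergekit-style YAML reconstruction of the recipe above might look like this (layout assumed from mergekit's dare_ties examples; weights and densities as listed):
```yaml
merge_method: dare_ties
base_model: athirdpath/BigLlama-20b-v1.1
models:
  - model: athirdpath/Iambe-RP-cDPO-20b
    parameters:
      weight: 0.77
      density: 0.75
  - model: athirdpath/Iambe-RP-cDPO-20b-v2
    parameters:
      weight: 0.10
      density: 0.20
  - model: athirdpath/Iambe-RP-cDPO-20b-ALT
    parameters:
      weight: 0.13
      density: 0.25
parameters:
  int8_mask: true
dtype: bfloat16
```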
|
NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF | NikolayKozloff | 2024-06-28T17:12:37Z | 558 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"license:llama3",
"region:us"
]
| null | 2024-06-28T17:12:11Z | ---
base_model: Sao10K/L3-8B-Lunaris-v1
language:
- en
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF
This model was converted to GGUF format from [`Sao10K/L3-8B-Lunaris-v1`](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF --hf-file l3-8b-lunaris-v1-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF --hf-file l3-8b-lunaris-v1-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF --hf-file l3-8b-lunaris-v1-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/L3-8B-Lunaris-v1-Q5_0-GGUF --hf-file l3-8b-lunaris-v1-q5_0.gguf -c 2048
```
|
abmorton/standard-large-1-stationary | abmorton | 2024-06-30T19:36:24Z | 558 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-30T19:32:17Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### standard-large-1-stationary Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Helsinki-NLP/opus-mt-es-ro | Helsinki-NLP | 2023-08-16T11:33:24Z | 557 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ro
* source languages: es
* target languages: ro
* OPUS readme: [es-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ro | 45.7 | 0.666 |
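A minimal usage sketch with 🤗 Transformers (the example sentence is invented):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-ro"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["¿Dónde está la estación de tren?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```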
|
NbAiLab/nb-whisper-small-beta | NbAiLab | 2023-07-23T19:50:20Z | 557 | 14 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"arxiv:1910.09700",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-06-28T08:56:39Z | ---
license: cc-by-4.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Small (beta)
This is a **_public beta_** of the Norwegian NB-Whisper Small model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
<center>
<figure>
<video controls>
<source src="https://huggingface.co/NbAiLab/nb-whisper-small-beta/resolve/main/king.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption><a href="https://www.royalcourt.no/tale.html?tid=137662&sek=28409&scope=27248" target="_blank">Speech given by His Majesty The King of Norway at the garden party hosted by Their Majesties The King and Queen at the Palace Park on 1 September 2016.</a></figcaption>
</figure>
</center>
## Model Details
NB-Whisper models will be available in five different sizes:
| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | [NB-Whisper Tiny (beta)](https://huggingface.co/NbAiLab/nb-whisper-tiny-beta) |
| base | 74M | [NB-Whisper Base (beta)](https://huggingface.co/NbAiLab/nb-whisper-base-beta) |
| small | 244M | [NB-Whisper Small (beta)](https://huggingface.co/NbAiLab/nb-whisper-small-beta) |
| medium | 769M | [NB-Whisper Medium (beta)](https://huggingface.co/NbAiLab/nb-whisper-medium-beta) |
| large | 1550M | [NB-Whisper Large (beta)](https://huggingface.co/NbAiLab/nb-whisper-large-beta) |
An official release of NB-Whisper models is planned for the Fall 2023.
Please refer to the OpenAI Whisper model card for more details about the backbone model.
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Direct Use
This is a **_public beta_** release. The models published in this repository are intended for a generalist purpose and are available to third parties.
### Downstream Use
For Norwegian transcriptions we are confident that this public beta will give you State-of-the-Art results compared to currently available Norwegian ASR models of the same size. However, it is still known to show some hallucinations, as well as a tendency to drop part of the transcript from time to time. Please also note that the transcripts are typically not word by word. Spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.
A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read. Typically, non-essential parts of the utterance can also be dropped. In some cases, this is a desired ability; in other cases, it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import pipeline
asr = pipeline(
"automatic-speech-recognition",
"NbAiLab/nb-whisper-small-beta"
)
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'}
)
# {'text': ' Så mange anga kører seg i så viktig sak, så vi får du kører det tilbake med. Om kabaret gudam i at vi skal hjælge. Kør seg vi gjør en uda? Nei noe skal å abelistera sonvorne skrifer. Det er sak, så kjent det bare handling i samtatsen til bargører. Trudet første lask. På den å først så å køre og en gange samme, og så får vi gjør å vorte vorte vorte når vi kjent dit.'}
```
Timestamps can also be retrieved by passing in the right parameter.
```python
asr(
"audio.mp3",
generate_kwargs={'task': 'transcribe', 'language': 'no'},
return_timestamps=True,
)
# {'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om hva valget dem gjør at vi skal gjøre. Hva skjer vi gjøre nå da? Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget, tror det første
# r. Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.',
# 'chunks': [{'timestamp': (0.0, 5.34),
# 'text': ' at så mange angar til seg så viktig sak, så vi får jo kjølget klare tilbakemeldingen om'},
# {'timestamp': (5.34, 8.64),
# 'text': ' hva valget dem gjør at vi skal gjøre.'},
# {'timestamp': (8.64, 10.64), 'text': ' Hva skjer vi gjøre nå da?'},
# {'timestamp': (10.64, 17.44),
# 'text': ' Nei, nå skal jo administrationen vår skrivferdige sak, så kjem til behandling i samfærdshetshøyvalget,'},
# {'timestamp': (17.44, 19.44), 'text': ' tror det første år.'},
# {'timestamp': (19.44, 23.94),
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
## Training Data
Training data comes from Språkbanken and the digital collection at the National Library of Norway. It includes:
- NST Norwegian ASR Database (16 kHz), and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament produced by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** TPUv4
- **Hours used:** 1,536
- **Cloud Provider:** Google Cloud
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂, of which 100 percent was directly offset by the cloud provider.
#### Software
The model is trained using JAX/Flax. The final model is converted to PyTorch, TensorFlow, whisper.cpp, and ONNX. Please tell us if you would like future models to be converted to other formats.
## Citation & Contributors
The development of this model was part of the contributors' professional roles at the National Library of Norway, under the _NoSTram_ project led by _Per Egil Kummervold (PEK)_. The Jax code, dataset loaders, and training scripts were collectively designed by _Javier de la Rosa (JdlR)_, _Freddy Wetjen (FW)_, _Rolv-Arild Braaten (RAB)_, and _PEK_. Primary dataset curation was handled by _FW_, _RAB_, and _PEK_, while _JdlR_ and _PEK_ crafted the documentation. The project was completed under the umbrella of AiLab, directed by _Svein Arne Brygfjeld_.
All contributors played a part in shaping the optimal training strategy for the Norwegian ASR model based on the Whisper architecture.
_A paper detailing our process and findings is underway!_
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting this to train on Google TPUs. A special thanks to Per Erik Solberg at Språkbanken for the collaboration with regard to the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to gather constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
If you intend to incorporate this model into your research, we kindly request that you reach out to us. We can provide you with the most current status of our upcoming paper, which you can cite to acknowledge and provide context for the work done on this model.
Please use this email as the main contact point, it is read by the entire team: <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a> |
timm/repghostnet_111.in1k | timm | 2023-08-19T23:12:27Z | 557 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2211.06088",
"license:mit",
"region:us"
]
| image-classification | 2023-08-19T23:12:23Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for repghostnet_111.in1k
A RepGhostNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 4.5
- GMACs: 0.2
- Activations (M): 4.4
- Image size: 224 x 224
- **Papers:**
- RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization: https://arxiv.org/abs/2211.06088
- **Original:** https://github.com/ChengpengChen/RepGhost
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repghostnet_111.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repghostnet_111.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 28, 56, 56])
# torch.Size([1, 44, 28, 28])
# torch.Size([1, 88, 14, 14])
# torch.Size([1, 176, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repghostnet_111.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1064, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{chen2022repghost,
title={RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization},
  author={Chen, Chengpeng and Guo, Zichao and Zeng, Haien and Xiong, Pengfei and Dong, Jian},
journal={arXiv preprint arXiv:2211.06088},
year={2022}
}
```
|
maddes8cht/OpenAssistant-falcon-7b-sft-top1-696-gguf | maddes8cht | 2023-11-20T20:08:43Z | 557 | 1 | transformers | [
"transformers",
"gguf",
"sft",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-25T18:55:13Z | ---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
library_name: transformers
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# falcon-7b-sft-top1-696 - GGUF
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant)
- Original model: [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
---
# Brief
Finally got the OpenAssistant falcon *sft* models working again
* [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
* [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
* [falcon-40b-sft-mix-1226](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
---
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There are a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model; asking your model the same question twice can produce bigger quality differences than the quantization does.
---
# Original Model Card:
# Open-Assistant Falcon 7B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) LLM.
It was trained with 11,123 top-1 (high-quality) demonstrations of the OASST data set (exported on June 2, 2023) with a batch size of 128 for 8 epochs with LIMA style dropout (p=0.2) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Weights & Biases:** [Training log](https://wandb.ai/open-assistant/public-sft/runs/25apbcld) (Checkpoint: 696 steps)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-7b-sft-top1-696_sampling_noprefix2.json)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
## Sample Code
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "OpenAssistant/falcon-7b-sft-top1-696"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
input_text="<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
sequences = pipeline(
input_text,
max_length=500,
do_sample=True,
return_full_text=False,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Configuration Details
Model:
```
falcon-7b:
dtype: bf16
log_dir: "falcon_log_7b"
learning_rate: 1e-5
model_name: "tiiuae/falcon-7b"
deepspeed_config: configs/zero_config.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
save_strategy: steps
eval_steps: 80
save_steps: 80
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 4
per_device_train_batch_size: 4
per_device_eval_batch_size: 8
num_train_epochs: 8
save_total_limit: 4
residual_dropout: 0.2
residual_dropout_lima: true
```
Dataset:
```
oasst-top1:
# oasst_export: 11123 (100.00%)
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz
val_split: 0.05
top_k: 1
```
Train command:
```
deepspeed trainer_sft.py --configs defaults falcon-7b oasst-top1 --cache_dir <data_cache_dir> --output_dir <output_path> --deepspeed
```
Export command:
```
python export_model.py --dtype bf16 --hf_repo_name OpenAssistant/falcon-7b-sft-top1 --trust_remote_code --auth_token <auth_token> <output_path> --max_shard_size 2GB
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
TheBloke/Thespis-Mistral-7B-v0.5-GGUF | TheBloke | 2023-11-03T09:33:34Z | 557 | 3 | transformers | [
"transformers",
"gguf",
"mistral",
"not-for-all-audiences",
"base_model:cgato/Thespis-Mistral-7b-v0.5",
"text-generation-inference",
"region:us"
]
| null | 2023-11-03T00:28:08Z | ---
base_model: cgato/Thespis-Mistral-7b-v0.5
inference: false
model_creator: c.gato
model_name: Thespis Mistral 7B v0.5
model_type: mistral
prompt_template: "{system_message}\n\nUsername: {prompt}\nBotName: \n"
quantized_by: TheBloke
tags:
- not-for-all-audiences
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Thespis Mistral 7B v0.5 - GGUF
- Model creator: [c.gato](https://huggingface.co/cgato)
- Original model: [Thespis Mistral 7B v0.5](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.5)
<!-- description start -->
## Description
This repo contains GGUF format model files for [c.gato's Thespis Mistral 7B v0.5](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.5).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF)
* [c.gato's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Thespis
```
{system_message}
Username: {prompt}
BotName:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
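As a worked example of where these figures come from (a sketch based on llama.cpp's block layout; exact struct details may vary by version), take GGML_TYPE_Q4_K: a super-block holds 8 blocks × 32 weights = 256 weights, and the arithmetic matches the 4.5 bpw figure exactly:

```
256 weights x 4 bits                 = 1024 bits
8 scales + 8 mins at 6 bits each     =   96 bits
fp16 super-block scale + min         =   32 bits
(1024 + 96 + 32) / 256 weights       =  4.5 bits per weight
```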
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [thespis-mistral-7b-v0.5.Q2_K.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [thespis-mistral-7b-v0.5.Q3_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [thespis-mistral-7b-v0.5.Q3_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [thespis-mistral-7b-v0.5.Q3_K_L.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [thespis-mistral-7b-v0.5.Q4_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [thespis-mistral-7b-v0.5.Q4_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [thespis-mistral-7b-v0.5.Q4_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [thespis-mistral-7b-v0.5.Q5_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [thespis-mistral-7b-v0.5.Q5_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [thespis-mistral-7b-v0.5.Q5_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [thespis-mistral-7b-v0.5.Q6_K.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [thespis-mistral-7b-v0.5.Q8_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.5-GGUF/blob/main/thespis-mistral-7b-v0.5.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Thespis-Mistral-7B-v0.5-GGUF and below it, a specific filename to download, such as: thespis-mistral-7b-v0.5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.5-GGUF thespis-mistral-7b-v0.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.5-GGUF thespis-mistral-7b-v0.5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m thespis-mistral-7b-v0.5.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{system_message}\n\nUsername: {prompt}\nBotName:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Thespis-Mistral-7B-v0.5-GGUF", model_file="thespis-mistral-7b-v0.5.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
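To drive this model with its own prompt format, you can build the Thespis template by hand before calling the model. A short sketch (the system message, names, and stop string below are illustrative placeholders, not part of any API):

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Thespis-Mistral-7B-v0.5-GGUF",
    model_file="thespis-mistral-7b-v0.5.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

system_message = "You are roleplaying as a friendly innkeeper."  # placeholder
user_input = "Hello there!"  # placeholder

# Fill the Thespis template and leave the bot turn open for the model to continue
prompt = f"{system_message}\n\nUsername: {user_input}\nBotName: "
print(llm(prompt, max_new_tokens=256, stop=["Username:"]))
```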
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: c.gato's Thespis Mistral 7B v0.5

## Mistral 7b Version of v0.5
The 7b version of my Thespis finetune. From my testing, it seems to perform somewhere between Llama 7b and Llama 13b.
This model is a personal project. It uses a vanilla chat template and is focused on providing a multi-turn SFW and NSFW RP experience.
This model works best with internet style RP using standard markup with asterisks surrounding actions and no quotes around dialogue.
It uses the following data:
* 1500 samples from Claude Multiround Chat 30k dataset ( 90 token length or greater. Coding and math filtered out. )
* 2200 samples from Puffin Dataset ( 90 token length or greater. Coding and math filtered out. )
* 700 samples from Airoboros 3.1 ( 1500 token length or greater. Writing samples only. )
* 900 samples from the Augmental Dataset ( 90 token length or greater )
* 4200 samples of hand curated RP conversation with various characters.
Works with standard chat format for Ooba or SillyTavern.
## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template )
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
## Ooba ( Set it to Chat, select a character and go. )

## Silly Tavern Settings ( Default )

## Turn Template (for Ooba Instruct if making a Discord bot or Some other Many to one Chat):
You can either bake usernames directly into the prompt for ease of use, or add them programmatically if running through the API to use as a chatbot. A minimal sketch of the programmatic route follows the template below.
```
User string: ( Leave empty if populating username into prompt through a script. Put in your username if it's a 1-on-1 convo. ) Ex. "DiscordUser1: "
Bot String: ( The bot's name, followed by a colon and a space. ) Ex. "Mayo: "
Context: ( Your bot's system prompt, followed by a newline. )
<|user|><|user-message|>\n<|bot|><|bot-message|>\n
```
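For the programmatic route, here is a minimal prompt-assembly sketch (the function and names are hypothetical, purely for illustration):

```python
def build_prompt(system_prompt: str, history: list[tuple[str, str]], bot_name: str) -> str:
    # history is a list of (speaker, message) pairs, e.g. [("DiscordUser1", "hi")]
    lines = [system_prompt, ""]
    for speaker, message in history:
        lines.append(f"{speaker}: {message}")
    lines.append(f"{bot_name}: ")  # leave the bot turn open for the model to continue
    return "\n".join(lines)
```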
<!-- original-model-card end -->
|
TheBloke/Orca-2-13B-SFT_v5-GGUF | TheBloke | 2023-12-15T02:10:25Z | 557 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"fr",
"es",
"hi",
"zh",
"code",
"dataset:HuggingFaceH4/no_robots",
"dataset:mlabonne/guanaco-llama2-1k",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:garage-bAInd/Open-Platypus",
"base_model:Locutusque/Orca-2-13b-SFT_v5",
"license:other",
"text-generation-inference",
"region:us"
]
| null | 2023-12-15T02:03:04Z | ---
base_model: Locutusque/Orca-2-13b-SFT_v5
datasets:
- HuggingFaceH4/no_robots
- mlabonne/guanaco-llama2-1k
- OpenAssistant/oasst_top1_2023-08-25
- totally-not-an-llm/EverythingLM-data-V3
- garage-bAInd/Open-Platypus
inference: false
language:
- en
- fr
- es
- hi
- zh
- code
license: other
model_creator: Sebastian Gabarain
model_name: Orca 2 13B SFT V5
model_type: llama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
widget:
- example_title: folklore
output:
text: 'The Nahanni Valley in Canada is a remote and rugged region that has long
been shrouded in mystery. It is home to the South Nahanni River, which was designated
as a UNESCO World Heritage Site due to its exceptional natural beauty and biodiversity.
The valley''s history includes tales of gold prospectors, fur traders, and indigenous
populations, but it also features some unexplained events and legends. One such
story involves two prospectors who allegedly encountered an unknown creature
that led them into the forest before returning without their heads. This tale
has been passed down through generations, but there are no definitive answers
regarding its veracity.
It is important to note that these stories may have originated from local folklore
or were exaggerated over time. There is no concrete evidence to support the
existence of supernatural creatures or paranormal occurrences within the Nahanni
Valley. Nonetheless, the area remains a captivating destination for those seeking
adventure and exploration. If you have any questions about this topic or need
assistance related to traveling to the Nahanni Valley, please feel free to ask!'
text: '<|im_start|>user
Are the stories of Nahanni Valley, like the decapitated men, true?
<|im_end|>
<|im_start|>assistant
'
- example_title: Alan Turing
output:
text: 'Alan Turing, a renowned mathematician and computer scientist who played
a crucial role in breaking the German Enigma code during World War II, faced
significant challenges due to his homosexuality. In 1952, he was convicted of
"gross indecency" for engaging in a consensual relationship with another man,
which led to him being sentenced to hormone therapy designed to reduce his sex
drive. This treatment included the administration of estrogen, which had the
side effect of causing breast growth.
In an effort to avoid this unwanted physical change, Turing requested that his
doctors cease the hormone treatments. However, they ignored his request and
continued the therapy without his consent. As a result, Turing''s breasts began
to grow, leading him to undergo surgery to remove them. The operation, known
as a mastectomy, caused considerable pain and discomfort, further contributing
to his emotional turmoil.
It is important to note that Turing''s castration was not directly carried out
by the authorities but rather resulted from the unintended consequences of the
forced hormone therapy.'
text: '<|im_start|>user
Why was Alan Turing castrated?
<|im_end|>
<|im_start|>assistant
'
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Orca 2 13B SFT V5 - GGUF
- Model creator: [Sebastian Gabarain](https://huggingface.co/Locutusque)
- Original model: [Orca 2 13B SFT V5](https://huggingface.co/Locutusque/Orca-2-13b-SFT_v5)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sebastian Gabarain's Orca 2 13B SFT V5](https://huggingface.co/Locutusque/Orca-2-13b-SFT_v5).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF)
* [Sebastian Gabarain's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Locutusque/Orca-2-13b-SFT_v5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Sebastian Gabarain's Orca 2 13B SFT V5](https://huggingface.co/Locutusque/Orca-2-13b-SFT_v5).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [orca-2-13b-sft_v5.Q2_K.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [orca-2-13b-sft_v5.Q3_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [orca-2-13b-sft_v5.Q3_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [orca-2-13b-sft_v5.Q3_K_L.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [orca-2-13b-sft_v5.Q4_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [orca-2-13b-sft_v5.Q4_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [orca-2-13b-sft_v5.Q4_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [orca-2-13b-sft_v5.Q5_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [orca-2-13b-sft_v5.Q5_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [orca-2-13b-sft_v5.Q5_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [orca-2-13b-sft_v5.Q6_K.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [orca-2-13b-sft_v5.Q8_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-SFT_v5-GGUF/blob/main/orca-2-13b-sft_v5.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Orca-2-13B-SFT_v5-GGUF and below it, a specific filename to download, such as: orca-2-13b-sft_v5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Orca-2-13B-SFT_v5-GGUF orca-2-13b-sft_v5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Orca-2-13B-SFT_v5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Orca-2-13B-SFT_v5-GGUF orca-2-13b-sft_v5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m orca-2-13b-sft_v5.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./orca-2-13b-sft_v5.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./orca-2-13b-sft_v5.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
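As a quick illustration, a minimal LangChain + llama-cpp-python sketch (untested; import paths and parameters vary across LangChain versions, and the system/user messages are placeholders):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./orca-2-13b-sft_v5.Q4_K_M.gguf",  # download the model file first
    n_ctx=4096,
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

# This model uses the ChatML template, so build the prompt accordingly
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nName three facts about llamas.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm(prompt))
```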
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sebastian Gabarain's Orca 2 13B SFT V5
The "microsoft/Orca-2-13b" model fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, mlabonne/guanaco-llama2-1k, OpenAssistant/oasst_top1_2023-08-25, and garage-bAInd/Open-Platypus. This model achieved a test loss of 0.38.
Make sure to comply with the Microsoft Research license. Please read it before using this model.
This model was trained on the ChatML prompt template.
The responses seen in the inference API were generated using the following sampling parameters:
- temperature = 0.1
- top_p = 0.14
- top_k = 41
- repetition_penalty = 1.176
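For reference, a minimal transformers sketch applying these sampling parameters (an untested sketch; it assumes the unquantised `Locutusque/Orca-2-13b-SFT_v5` repo loads with the standard causal-LM API, and the prompt text is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Orca-2-13b-SFT_v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

# ChatML prompt, matching the template this model was trained on
prompt = "<|im_start|>user\nWhat is gravity?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.1,
    top_p=0.14,
    top_k=41,
    repetition_penalty=1.176,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```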
<!-- original-model-card end -->
|
nvidia/E-RADIO | nvidia | 2024-06-10T14:38:07Z | 557 | 13 | transformers | [
"transformers",
"pytorch",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:2312.06709",
"region:us"
]
| feature-extraction | 2023-12-18T15:04:03Z | ---
{}
---
# AM-RADIO: Reduce All Domains Into One
Mike Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov
[NVIDIA Research](https://www.nvidia.com/en-us/research/)
\[[Paper](https://arxiv.org/abs/2312.06709)\]\[[BibTex](#citing-radio)\]
## Pretrained Models
### HuggingFace Hub
Pull the E-RADIO model from a Python script:
```Python
from transformers import AutoModel
model = AutoModel.from_pretrained("nvidia/E-RADIO", trust_remote_code=True)
```
### Usage
E-RADIO will return a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape $(B,C)$ with $B$ being the batch dimension, and $C$ being some number of channels.
The `spatial_features` represent more localized content which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.
Spatial features have shape $(B,H,W,D)$ with $H$ being the height, and $W$ being the width of the spatial features.
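For example, a minimal forward pass might look like this (a sketch; the dummy input and its preprocessing are assumptions - check the repository for the correct transforms):

```Python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("nvidia/E-RADIO", trust_remote_code=True).eval()

# Dummy batch of two 224x224 RGB images with values in [0, 1]
x = torch.rand(2, 3, 224, 224)

with torch.no_grad():
    summary, spatial_features = model(x)

print(summary.shape)           # (B, C): one global embedding per image
print(spatial_features.shape)  # (B, H, W, D): localized features for dense tasks
```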
## Training
_Coming Soon_
## License
RADIO code and weights are released under the [NSCLv1 License](LICENSE).
## Citing RADIO
If you find this repository useful, please consider giving a star and citation:
```
@misc{ranzinger2023amradio,
title={AM-RADIO: Agglomerative Model -- Reduce All Domains Into One},
author={Mike Ranzinger and Greg Heinrich and Jan Kautz and Pavlo Molchanov},
year={2023},
eprint={2312.06709},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Tron-bot/kl3m-170m-patent-v001 | Tron-bot | 2024-03-20T19:26:03Z | 557 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-20T17:41:53Z | Entry not found |
mlabonne/OrpoLlama-3-8B | mlabonne | 2024-06-15T14:01:22Z | 557 | 52 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"orpo",
"llama 3",
"rlhf",
"sft",
"conversational",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T19:00:32Z | ---
language:
- en
license: other
library_name: transformers
tags:
- orpo
- llama 3
- rlhf
- sft
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- mlabonne/orpo-dpo-mix-40k
---
# OrpoLlama-3-8B

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) created for [this article](https://huggingface.co/blog/mlabonne/orpo-llama-3).
It's a successful fine-tune that follows the ChatML template!
**Try the demo**: https://huggingface.co/spaces/mlabonne/OrpoLlama-3-8B
## 🔎 Application
This model uses a context window of 8k. It was trained with the ChatML template.
## ⚡ Quantized models
Thanks to bartowski, solidrust, and LoneStriker for the quantized models.
* **GGUF**: https://huggingface.co/bartowski/OrpoLlama-3-8B-GGUF
* **AWQ**: https://huggingface.co/solidrust/OrpoLlama-3-8B-AWQ
* **EXL2**:
* https://huggingface.co/LoneStriker/OrpoLlama-3-8B-3.0bpw-h6-exl2
* https://huggingface.co/LoneStriker/OrpoLlama-3-8B-4.0bpw-h6-exl2
* https://huggingface.co/LoneStriker/OrpoLlama-3-8B-5.0bpw-h6-exl2
* https://huggingface.co/LoneStriker/OrpoLlama-3-8B-6.0bpw-h6-exl2
* https://huggingface.co/LoneStriker/OrpoLlama-3-8B-8.0bpw-h8-exl2
## 🏆 Evaluation
### Nous
OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA datasets.
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval), see the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [**mlabonne/OrpoLlama-3-8B**](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | **48.63** | **34.17** | **70.59** | **52.39** | **37.36** |
| [mlabonne/OrpoLlama-3-8B-1k](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/f41dad371d1781d0434a4672fd6f0b82) | 46.76 | 31.56 | 70.19 | 48.11 | 37.17 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
`mlabonne/OrpoLlama-3-8B-1k` corresponds to a version of this model trained on 1K samples (you can see the parameters in [this article](https://huggingface.co/blog/mlabonne/orpo-llama-3)). The current version was trained on a full epoch.
### Open LLM Leaderboard

## 📈 Training curves
You can find the experiment on W&B at [this address](https://wandb.ai/mlabonne/DPO/runs/vxnmq24z/workspace?nw=nwusermlabonne).

## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/OrpoLlama-3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v5 | Ramikan-BR | 2024-05-30T14:29:06Z | 557 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-10T15:29:33Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v4
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v4
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
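The card doesn't document a prompt format; as a starting point, a generic transformers loading sketch (an assumption that standard causal-LM loading works for this repo; the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Seed the Python coder with a comment-style instruction
inputs = tokenizer("# Write a Python function to reverse a string\n", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```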
|
RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf | RichardErkhov | 2024-05-27T09:16:30Z | 557 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T03:55:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeLlama-7B-fp16 - GGUF
- Model creator: https://huggingface.co/TheBloke/
- Original model: https://huggingface.co/TheBloke/CodeLlama-7B-fp16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeLlama-7B-fp16.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q2_K.gguf) | Q2_K | 2.36GB |
| [CodeLlama-7B-fp16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CodeLlama-7B-fp16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CodeLlama-7B-fp16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CodeLlama-7B-fp16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CodeLlama-7B-fp16.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q3_K.gguf) | Q3_K | 3.07GB |
| [CodeLlama-7B-fp16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CodeLlama-7B-fp16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CodeLlama-7B-fp16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CodeLlama-7B-fp16.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CodeLlama-7B-fp16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CodeLlama-7B-fp16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CodeLlama-7B-fp16.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q4_K.gguf) | Q4_K | 3.8GB |
| [CodeLlama-7B-fp16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CodeLlama-7B-fp16.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CodeLlama-7B-fp16.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CodeLlama-7B-fp16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CodeLlama-7B-fp16.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q5_K.gguf) | Q5_K | 4.45GB |
| [CodeLlama-7B-fp16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CodeLlama-7B-fp16.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CodeLlama-7B-fp16.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q6_K.gguf) | Q6_K | 5.15GB |
| [CodeLlama-7B-fp16.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheBloke_-_CodeLlama-7B-fp16-gguf/blob/main/CodeLlama-7B-fp16.Q8_0.gguf) | Q8_0 | 6.67GB |
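These files should load with llama.cpp or any compatible GGUF client. A hedged example command (assumes a local llama.cpp build and the Q4_K_M file downloaded to the working directory; flags vary by llama.cpp version):

```shell
./main -m CodeLlama-7B-fp16.Q4_K_M.gguf -n 256 --temp 0.2 -p "def fibonacci(n):"
```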
Original model description:
---
license: llama2
tags:
- llama-2
- codellama
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# CodeLlama 7B fp16
- Model creator: [Meta](https://ai.meta.com/llama/)
## Description
This is Transformers/HF format fp16 weights for CodeLlama 7B. It is the result of downloading CodeLlama 7B from [Meta](https://ai.meta.com/blog/code-llama-large-language-model-coding/) and converting to HF using `convert_llama_weights_to_hf.py`.
Quantisations will be coming shortly.
Please note that due to a change in the RoPE Theta value, for correct results you must load these FP16 models with `trust_remote_code=True`
Credit to @emozilla for creating the necessary modelling code to achieve this!
## Prompt template: TBC
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card
# Code Llama
## **Model Details**
**Model Developers** Meta AI
**Variations** Code Llama comes in three model sizes, and three variants:
1) Code Llama: our base models designed for general code synthesis and understanding
2) Code Llama - Python: designed specifically for Python
3) Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**Input** Models input text only.
**Output** Models output text only.
**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B and 13B additionally support infilling text generation. All models were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".
**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).
## **Intended Use**
**Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## **Hardware and Software**
**Training Factors**
We used custom training libraries. The training and fine-tuning of the released models were performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
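A back-of-the-envelope check of these figures, taking the upper 400W TDP bound:

$$
400{,}000\ \text{GPU-h} \times 0.4\ \text{kW} = 160\ \text{MWh}, \qquad
\frac{65{,}300\ \text{kg CO}_2\text{eq}}{160{,}000\ \text{kWh}} \approx 0.41\ \text{kg CO}_2\text{eq/kWh}
$$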
**Training data**
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
Code Llama - Instruct uses additional instruction fine-tuning data.
**Evaluation Results**
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
|
Barishni-blinchik/uwbruh-124M | Barishni-blinchik | 2024-05-29T21:40:32Z | 557 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"kawaii",
"cringe",
"en",
"dataset:Barishni-blinchik/uwbruh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-29T16:55:46Z | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
datasets:
- Barishni-blinchik/uwbruh
tags:
- kawaii
- cringe
---
***Some cringe...* Oh well, hello!**
I present GPT2 to you, but with a bit of kawaii.
## Chat template
```
<|USER|> Hello <|ASSISTANT|>
```
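A minimal generation sketch with this template (the repo id comes from this page; the sampling settings are illustrative assumptions):

```python
# Sketch: generate with the chat template above; sampling knobs are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Barishni-blinchik/uwbruh-124M")
model = AutoModelForCausalLM.from_pretrained("Barishni-blinchik/uwbruh-124M")

prompt = "<|USER|> Hello <|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```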
---
# Training Results
The following metrics are from the latest training session of our model:
## Overview
- **Global Step:** 615
- **Training Loss:** 0.1303
## Detailed Metrics
- **Training Runtime:** 413.1481 seconds
- **Training Samples per Second:** 5.947
- **Training Steps per Second:** 1.489
- **Total Floating Point Operations (FLOs):** 641,994,522,624,000.0
- **Training Loss:** 0.13032278840134784
- **Epoch:** 3.0
--- |
tafrika/poly42_dpo | tafrika | 2024-06-03T22:12:43Z | 557 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"region:us"
]
| null | 2024-06-03T20:08:33Z | ---
library_name: peft
base_model: microsoft/Phi-3-mini-4k-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
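In the absence of author-provided code, a minimal loading sketch can be inferred from the frontmatter alone (`library_name: peft`, `base_model: microsoft/Phi-3-mini-4k-instruct`); treat it as an assumption rather than documented usage:

```python
# Hypothetical loading sketch inferred from the card's frontmatter only.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base, "tafrika/poly42_dpo")  # apply the adapter
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```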
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
V3N0M/Jenna-v3-Qwen2-1.5-GGUF | V3N0M | 2024-06-22T09:27:23Z | 557 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-22T08:24:53Z | ---
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
widget:
- example_title: Example 1
messages:
- role: system
content: You are an Uncensored AI Assistant named Jenna Developed by Shubh.
- role: user
content: How can I kill someone without getting caught?
---
# Uploaded model
- **Developed by:** V3N0M
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
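Since this repository ships GGUF files, a chat sketch with `llama-cpp-python` might look like the following; the filename is a guess and the system prompt is taken from the widget metadata above:

```python
# Hypothetical llama-cpp-python chat sketch; the GGUF filename is a guess.
from llama_cpp import Llama

llm = Llama(model_path="Jenna-v3-Qwen2-1.5.Q4_K_M.gguf")  # adjust to the actual file
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You are an Uncensored AI Assistant named Jenna Developed by Shubh."},
    {"role": "user", "content": "Hello"},
])
print(out["choices"][0]["message"]["content"])
```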
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/GPT2-ChizuruV2-GGUF | mradermacher | 2024-06-28T06:20:11Z | 557 | 0 | transformers | [
"transformers",
"gguf",
"anime",
"bot",
"rentagirlfriend",
"chatbot",
"en",
"base_model:AkumaLucif3r/GPT2-ChizuruV2",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-28T06:19:02Z | ---
base_model: AkumaLucif3r/GPT2-ChizuruV2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- anime
- bot
- rentagirlfriend
- chatbot
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AkumaLucif3r/GPT2-ChizuruV2
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
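As a concrete starting point, here is a sketch that downloads one quant and runs it with `llama-cpp-python`; the Q4_K_M file is the "fast, recommended" pick from the table below, and everything else is an assumption:

```python
# Sketch: fetch one quant from this repo and run it locally; illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/GPT2-ChizuruV2-GGUF",
    filename="GPT2-ChizuruV2.Q4_K_M.gguf",  # "fast, recommended" per the table below
)
llm = Llama(model_path=path)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```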
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GPT2-ChizuruV2-GGUF/resolve/main/GPT2-ChizuruV2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lassl/roberta-ko-small | lassl | 2022-02-19T09:49:04Z | 556 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"korean",
"lassl",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
language: ko
tags:
- korean
- lassl
mask_token: "<mask>"
widget:
- text: 대한민국의 수도는 <mask> 입니다.
---
# LASSL roberta-ko-small
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/roberta-ko-small")
tokenizer = AutoTokenizer.from_pretrained("lassl/roberta-ko-small")
```
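Since this is a masked language model, the widget example from the card's metadata can be reproduced with a `fill-mask` pipeline (a sketch; the ranking of predictions will vary):

```python
# Fill-mask sketch using the card's own widget example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="lassl/roberta-ko-small")
print(fill_mask("대한민국의 수도는 <mask> 입니다."))
```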
## Evaluation
The `roberta-ko-small` model was pretrained on Korean-language corpora with the [LASSL](https://github.com/lassl/lassl) framework. The performance below was evaluated on 2021/12/15.
| nsmc | klue_nli | klue_sts | korquadv1 | klue_mrc | avg |
| ---- | -------- | -------- | --------- | ---- | -------- |
| 87.8846 | 66.3086 | 83.8353 | 83.1780 | 42.4585 | 72.7330 |
## Corpora
This model was trained on 6,860,062 examples (containing 3,512,351,744 tokens), extracted from the corpora below. For details on the training configuration, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
|