modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
timm/pit_ti_distilled_224.in1k | timm | 2023-04-26T00:08:31Z | 556 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.16302",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-26T00:08:25Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for pit_ti_distilled_224.in1k
A PiT (Pooling-based Vision Transformer) image classification model. Trained on ImageNet-1k with token-based distillation by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.1
- GMACs: 0.7
- Activations (M): 6.2
- Image size: 224 x 224
- **Papers:**
- Rethinking Spatial Dimensions of Vision Transformers: https://arxiv.org/abs/2103.16302
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/naver-ai/pit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # used for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('pit_ti_distilled_224.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_ti_distilled_224.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 27, 27])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_ti_distilled_224.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2, 256) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{heo2021pit,
title={Rethinking Spatial Dimensions of Vision Transformers},
author={Byeongho Heo and Sangdoo Yun and Dongyoon Han and Sanghyuk Chun and Junsuk Choe and Seong Joon Oh},
booktitle = {International Conference on Computer Vision (ICCV)},
year={2021},
}
```
|
timm/pit_xs_distilled_224.in1k | timm | 2023-04-26T00:08:51Z | 556 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.16302",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-26T00:08:43Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for pit_xs_distilled_224.in1k
A PiT (Pooling-based Vision Transformer) image classification model. Trained on ImageNet-1k with token-based distillation by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.0
- GMACs: 1.4
- Activations (M): 7.8
- Image size: 224 x 224
- **Papers:**
- Rethinking Spatial Dimensions of Vision Transformers: https://arxiv.org/abs/2103.16302
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/naver-ai/pit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # used for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('pit_xs_distilled_224.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_xs_distilled_224.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 27, 27])
# torch.Size([1, 192, 14, 14])
# torch.Size([1, 384, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'pit_xs_distilled_224.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{heo2021pit,
title={Rethinking Spatial Dimensions of Vision Transformers},
author={Byeongho Heo and Sangdoo Yun and Dongyoon Han and Sanghyuk Chun and Junsuk Choe and Seong Joon Oh},
booktitle = {International Conference on Computer Vision (ICCV)},
year={2021},
}
```
|
TheBloke/OpenOrca_Stx-GGUF | TheBloke | 2023-09-27T12:49:25Z | 556 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"ja",
"dataset:snow_simplified_japanese_corpus",
"dataset:khalidalt/tydiqa-goldp",
"dataset:csebuetnlp/xlsum",
"base_model:lightblue/openorca_stx",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-09-16T06:33:51Z | ---
language:
- ja
license: llama2
datasets:
- snow_simplified_japanese_corpus
- khalidalt/tydiqa-goldp
- csebuetnlp/xlsum
model_name: OpenOrca Stx
base_model: lightblue/openorca_stx
inference: false
model_creator: Lightblue Technology Inc.
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OpenOrca Stx - GGUF
- Model creator: [Lightblue Technology Inc.](https://huggingface.co/lightblue)
- Original model: [OpenOrca Stx](https://huggingface.co/lightblue/openorca_stx)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Lightblue Technology Inc.'s OpenOrca Stx](https://huggingface.co/lightblue/openorca_stx).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenOrca_Stx-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca_Stx-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF)
* [Lightblue Technology Inc.'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lightblue/openorca_stx)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
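As a rough, unofficial illustration of where a figure like Q4_K's 4.5 bpw comes from (assuming, beyond the description above, one fp16 super-block scale and one fp16 super-block min):
```python
# Back-of-the-envelope bits-per-weight estimate for GGML_TYPE_Q4_K.
weights = 8 * 32                     # super-block: 8 blocks of 32 weights
quant_bits = weights * 4             # 4-bit quantized weights
scale_min_bits = 8 * 6 + 8 * 6       # per-block scales and mins, 6 bits each
superblock_bits = 2 * 16             # assumed fp16 super-block scale and min
print((quant_bits + scale_min_bits + superblock_bits) / weights)  # 4.5
```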
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [openorca_stx.Q2_K.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [openorca_stx.Q3_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [openorca_stx.Q3_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [openorca_stx.Q3_K_L.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [openorca_stx.Q4_0.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [openorca_stx.Q4_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [openorca_stx.Q4_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [openorca_stx.Q5_0.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [openorca_stx.Q5_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [openorca_stx.Q5_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [openorca_stx.Q6_K.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [openorca_stx.Q8_0.gguf](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF/blob/main/openorca_stx.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/OpenOrca_Stx-GGUF and, below it, a specific filename to download, such as: openorca_stx.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/OpenOrca_Stx-GGUF openorca_stx.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/OpenOrca_Stx-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OpenOrca_Stx-GGUF openorca_stx.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m openorca_stx.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/OpenOrca_Stx-GGUF", model_file="openorca_stx.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
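As a minimal starting point for the llama-cpp-python route, a sketch along these lines should work (assumptions: `langchain-community` is installed and the Q4_K_M file from this repo has been downloaded to the current directory; parameter values mirror the llama.cpp example above):
```python
from langchain_community.llms import LlamaCpp
llm = LlamaCpp(
    model_path="./openorca_stx.Q4_K_M.gguf",  # file downloaded with the commands above
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)
print(llm.invoke("AI is going to"))
```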
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Lightblue Technology Inc.'s OpenOrca Stx
# About
This model is Lightblue's QLoRA finetune of OpenOrca's [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) model on Japanese fine-tuning datasets.
This model specialises in **Closed Question Answering** in Japanese. Input a piece of reference text, ask a question, and see the model answer based on the reference text.
We trained on equal samples of the following three datasets:
* [SNOW](https://huggingface.co/datasets/snow_simplified_japanese_corpus)
* [TyDiQA (Ja)](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
* [XLSUM (Ja)](https://huggingface.co/datasets/csebuetnlp/xlsum)
which resulted in a dataset of 13,167 samples total.
These three datasets were chosen as they represent three distinct fine-tuning tasks (text simplification, question answering, and text summarization, respectively), which we hypothesize can help to improve the language model's suitability for dealing with Japanese data.
These three datasets make up the model name: STX.
With these datasets, we achieve the following scores on the JGLUE benchmark:
| Model Name | Open-Orca/OpenOrcaxOpenChat-Preview2-13B | lightblue/openorca_stx |
|------------------------|------------------------------------------|------------------------|
| jsquad-1.1-0.3 | 0.692 | 0.836 |
| jcommonsenseqa-1.1-0.3 | 0.831 | 0.782 |
| jnli-1.1-0.3 | 0.504 | 0.48 |
| marc_ja-1.1-0.3 | 0.936 | 0.959 |
Our model achieves much better results on the question answering benchmark (JSQuAD) than the base checkpoint, without severe degradation of performance on the multiple-choice benchmarks (JCommonSense, JNLI, MARC-Ja), purely through QLoRA training.
This shows the potential of applying minimal QLoRA fine-tuning with Japanese datasets to strong language models such as [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) to achieve better results on narrow NLP tasks.
# How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_dir = "lightblue/openorca_stx"  # this model's Hub ID; replace with a local path if needed
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map='auto',
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
def do_closed_qa(context, question):
return context + "\n\n" + question
test_article = """ モノマネのレパートリーに「リーチ・マイケル選手」があるレイザーラモンRGさん。本人公認のモノマネですが、ラグビーファンの反応に少し驚いたそうです。
リーチ・マイケル選手のモノマネは、何がきっかけですか。
「2015年のワールドカップ(W杯)イングランド大会で日本が南アフリカを倒した次の日が、京都での番組ロケでした。当時は、アップルの共同創業者スティーブ・ジョブズのモノマネばかりでしたが、一緒にロケをしていたジャングルポケットから『リーチ・マイケルに似てますよ。ジョブズのまま、いけるんじゃないですか?』と言われたのが始まりです」
「ただ、みんな知識がない。ラグビーショップを探し、日本代表のユニホームが売り切れだったので、赤っぽいユニホームとピチピチの短パンをはいて。とりあえずSNSで『リーチ・マイケルです』っていっぱい写真を載せました」
「すると、それを見たリーチさん本人からDM(ダイレクトメッセージ)が届きました。『モノマネありがとうございます。もしモノマネをするなら、僕のユニホームを送りますので着てください』と。W杯後にユニホーム2着とパンツやソックスなどをほんまに送ってきてくれました。今着ているのがそれです」
これまで、数々の著名人をモノマネしてこられました。リーチ選手のネタの反響はいかがでしたか。
「僕はラグビー経験がないですし、ラグビーを全然知らなかったけど、やっぱり本人からユニホームを頂いてるっていう“印籠(いんろう)”みたいなのがあって。『あいつはリーチさん本人に認められてる』と。一目置かれているのかなと感じます」
「やっていることは、見た目を本人に寄せてワンチームって言うだけなんですけどね。それでも『わあ、リーチさんだ』と言ってもらえます」
「リーチさんと実際に会うことなんて、簡単にはできないじゃないですか。でも、リーチさんのまねをしているRGには会えたわ、みたいな(笑)。何だろうな、有名な神社の支社のような存在ですかね。ありがたがられるという意味では他のモノマネとはすごく違いますね」
"""
test_question = " リーチ・マイケルは何を送ってきましたか?"
print(pipe(do_closed_qa(test_article, test_question), max_new_tokens=128, temperature=0)[0]["generated_text"])
# "ユニホーム2着とパンツやソックスなど"
```
# Training details
This model was trained for 1000 steps (1.2 epochs) with the model being evaluated every 50 steps. We then chose the best model from these evaluations based on validation loss.
We used the [qlora](https://github.com/artidoro/qlora) package from artidoro.
We trained with the following hyperparameters:
```
Per device evaluation batch size: 16
Per device train batch size: 8
LoRA (lora_r): 64
LoRA alpha (lora_alpha): 16
LoRA modules: all
Double quantization: Enabled
Quantization type: nf4
BF16: Enabled
Bits: 4
Warmup ratio: 0.03
Learning rate scheduler type: Constant
Gradient checkpointing: Enabled
Gradient accumulation steps: 2
Learning rate: 0.0002
Adam beta2: 0.999
Maximum gradient norm: 0.3
LoRA dropout: 0.05
Weight decay: 0.0
```
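For orientation only, the settings above correspond roughly to the following peft/bitsandbytes configuration. This is an illustrative sketch, not the authors' actual training script (which used the qlora package); in particular, `target_modules="all-linear"` is a stand-in assumption for "LoRA modules: all".
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
# 4-bit NF4 quantization with double quantization and bf16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/OpenOrcaxOpenChat-Preview2-13B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)
# LoRA settings from the list above: r=64, alpha=16, dropout=0.05, applied to all linear layers
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```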


<!-- original-model-card end -->
|
projecte-aina/FLOR-760M | projecte-aina | 2024-01-19T13:39:12Z | 556 | 1 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"FLOR",
"spanish",
"catalan",
"english",
"en",
"es",
"ca",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-12T09:24:11Z | ---
language:
- en
- es
- ca
license: apache-2.0
tags:
- FLOR
- bloom
- spanish
- catalan
- english
pipeline_tag: text-generation
widget:
- text: |-
Respon a la pregunta següent.
Pregunta: "Quina és la capital de Suècia?"
Resposta: "La capital de Suècia és Estocolm."
----
Respon a la pregunta següent.
Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
Resposta: "La majoria de gent consumeix cafè per despertar-se."
----
Respon a la pregunta següent.
Pregunta: "Explica com funciona un motor de combustió"
Resposta:
example_title: Pregunta-Resposta
- text: |-
Extrae las entidades nombradas del siguiente texto:
Texto: "Me llamo Wolfgang y vivo en Berlin"
Entidades: Wolfgang:PER, Berlin:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center"
Entidades: parc güell:LOC, barcelona supercomputing center:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Maria y Miguel no tienen ningún problema contigo"
Entidades: Maria:PER, Miguel:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Damián se cortó el pelo"
Entidades: Damián:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
Entidades: Pablo:PER, Barcelona:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Carlos comparte piso con Marc"
Entidades:
example_title: Entidades-Nombradas
---
# FLOR-760M
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
</details>
## Model description
**FLOR-760M** is a 760M-parameter transformer-based causal language model for Catalan, Spanish, and English.
It is the result of a language adaptation technique performed on [BLOOM-1.1B](https://huggingface.co/bigscience/bloom-1b1),
which involves modifying the model's vocabulary and embedding layer and continuously pre-training the model with 26B tokens in our target languages.
For more details, take a look at [this blogpost](https://medium.com/@mpamies247/flor-6-3b-a-chinchilla-compliant-model-for-catalan-spanish-and-english-7cdb389a9aac) about the project.
## Intended uses and limitations
The **FLOR-760M** model is ready-to-use only for causal language modeling.
It can perform text-generation tasks and be fine-tuned for specific scenarios.
## How to use
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
input_text = "Sovint em trobo pensant en tot allò que"
model_id = "projecte-aina/FLOR-760M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques
on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Language adaptation and training
The language adaptation technique used to create FLOR-760M requires the vocabulary of the source model
to be adapted before continuing its pre-training with data in the target languages. Specifically, we proceeded as follows:
1) We trained our own BPE tokenizer for Catalan, Spanish, and English, and replaced the original BLOOM tokenizer and vocabulary with it. This procedure implied a downsizing of the original BLOOM's embedding layer and, therefore, a model compression from 1.1B parameters to 760M.
2) The embeddings corresponding to tokens that are present in both the original and the target vocabulary (matching tokens) were used for initialization.
3) The embeddings from tokens not present in BLOOM's original vocabulary were initialized as the average of all embeddings (steps 2 and 3 are illustrated in the sketch after this list).
4) The model was initialized with the weights from BLOOM-1.1B, and with our adapted tokenizer (step 1) and embeddings (steps 2-3).
5) The model was then trained on a corpus that contains a mixture of Catalan, Spanish, and English data.
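The snippet below is an illustrative sketch of steps 2 and 3 (not the project's actual code): rows for tokens shared between the two vocabularies are copied from the source embedding matrix, and every other row is set to the mean of all source embeddings.
```python
import torch
def adapt_embeddings(old_emb: torch.Tensor, old_vocab: dict, new_vocab: dict) -> torch.Tensor:
    # Default initialization: the average of all source embeddings
    new_emb = old_emb.mean(dim=0, keepdim=True).repeat(len(new_vocab), 1)
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            # Matching token: reuse the source embedding row
            new_emb[new_id] = old_emb[old_id]
    return new_emb
```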
### Training data
The training corpus is the same as the one used to train [Ǎguila-7B](https://huggingface.co/projecte-aina/aguila-7b).
It consists of 26B tokens of several corpora gathered from web crawlings and public domain data.
| Dataset | Language | Words (per-epoch) | Epochs |
|---------------------|----------|--------------------|--------------|
| Wikipedia | en | 2169.97M | 1.428144485 |
| C4_es | es | 53709.80M | 0.1049686196 |
| Biomedical | es | 455.03M | 0.7140722425 |
| Legal | es | 995.70M | 0.7140722425 |
| Wikipedia | es | 693.60M | 1.428144485 |
| Gutenberg | es | 53.18M | 0.7140722425 |
| C4_ca | ca | 2826.00M | 2.142216727 |
| Biomedical | ca | 11.80M | 1.428144485 |
| RacoCatalà Noticias | ca | 17.16M | 2.142216727 |
| RacoCatalà Forums | ca | 333.73M | 2.142216727 |
| CaWaC | ca | 57.79M | 2.142216727 |
| Wikipedia | ca | 228.01M | 3.570361212 |
| Vilaweb | ca | 50.34M | 2.142216727 |
### Languages
The training data has the same amount of Catalan and Spanish texts, and a smaller amount of English data.
The table below shows the final language distribution:
|Language|Percentage|
|--------|----------|
| English (EN) | 16.84% |
| Spanish (ES) | 41.38% |
| Catalan (CA) | 41.79% |
### Training hyperparameters
- seed: 1
- distributed_type: [WSE-2](https://www.cerebras.net/product-chip/)
- num_devices: 1
- train_batch_size: 60
- eval_batch_size: 60
- optimizer: AdamW
- betas: (0.9,0.95)
- epsilon: 1e-08
- weight_decay_rate: 0.1
- learning_rate:
  - scheduler: "Linear"
    initial_learning_rate: 0.0
    end_learning_rate: 4.1e-5
    steps: 3050
  - scheduler: "CosineDecay"
    initial_learning_rate: 4.1e-5
    end_learning_rate: 3.4e-6
    steps: 209133
  - scheduler: "Constant"
    learning_rate: 2.2e-6
- num_epochs: 1.0
### Framework versions
The training was conducted in a Cerebras' [CS-2 system](https://www.cerebras.net/product-system/)
using the [cs-1.9.1](https://github.com/Cerebras/modelzoo/releases/tag/Release_1.9.1) release of their software.
## Evaluation
FLOR-760M has been evaluated in a 5-shot setting, using EleutherAI's Evaluation Harness implementation, on several datasets in Catalan, Spanish, and English, with particular emphasis on Catalan datasets.
The tasks were chosen to cover several evaluation areas in order to provide a comprehensive overview of the model's capabilities. The baselines used to compare our results are multilingual and English open-source 1.3B models: mGPT-1.3B, GPT-Neo-1.3B, Pythia-1.4B, OPT-1.3B, Falcon-rw-1.3B, and Cerebras-GPT-1.3B.
Our implementation of EleutherAI's *LM Evaluation Harness* can be found [here](https://github.com/langtech-bsc/lm-evaluation-harness/tree/FLOR-eval).
The following is a list of evaluation areas and their respective datasets:
- Reading Comprehension: [Belebele](https://huggingface.co/datasets/facebook/belebele)
- Question Answering: [XQuAD](https://huggingface.co/datasets/xquad), [CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa), [CoQCat](https://huggingface.co/datasets/projecte-aina/CoQCat)
- Natural Language Inference: [XNLI](https://huggingface.co/datasets/xnli) and its translation to Catalan ([XNLI-ca](https://huggingface.co/datasets/projecte-aina/xnli-ca)), [TE-ca](https://huggingface.co/datasets/projecte-aina/teca)
- Paraphrase Identification: [PAWS-X](https://huggingface.co/datasets/paws-x) and its translation to Catalan ([PAWS-ca](https://huggingface.co/datasets/projecte-aina/PAWS-ca)), [Parafraseja](https://huggingface.co/datasets/projecte-aina/Parafraseja)
- Commonsense Reasoning: [COPA](https://people.ict.usc.edu/~gordon/copa.html) and its translation to Catalan ([COPA-ca](https://huggingface.co/datasets/projecte-aina/COPA-ca))
- Translation: [FLoRes](https://huggingface.co/datasets/flores)
### Reading Comprehension and Question Answering
| Model | Belebele-ca | Belebele-es | Belebele-en | XQuAD-ca | XQuAD-es | XQuAD-en | CatalanQA | CoQCat |
| ------|:-----------:|:-----------:|:-----------:|:--------:|:--------:|:--------:|:---------:|:------:|
Random | 25.00 | 25.00 | 25.00 | - | - | - | - | - |
mGPT-1.3B | 26.64 | 25.82 | 28.07 | 0.33 | 0.67 | 0.17 | 0.65 | 0.78 |
GPT-Neo-1.3B | 39.55 | 37.50 | 42.83 | 19.75 | 29.77 | 51.53 | 22.34 | 23.57 |
Pythia-1.4B | 38.32 | 36.89 | 44.26 | 26.19 | 34.13 | 52.98 | 27.47 | 25.38 |
OPT-1.3B | 35.86 | 37.09 | 45.49 | 23.53 | 31.85 | 52.95 | 26.58 | 20.18 |
Falcon-rw-1.3B | 34.84 | 35.66 | **50.61** | 5.93 | 19.25 | **58.60** | 6.91 | 15.61 |
Cerebras-GPT-1.3B | 32.79 | 31.76 | 35.04 | 8.56 | 19.98 | 36.00 | 10.87 | 14.12 |
BLOOM-1.1B | 39.34 | 38.32 | 41.19 | 36.81 | 36.98 | 44.10 | 44.65 | 34.57 |
FLOR-760M | **41.19** | **39.55** | 36.68 | **41.10** | **41.11** | 40.20 | **51.01** | **41.34** |
### Natural Language Inference and Paraphrase Identification
| Model | XNLI-ca | XNLI-es | XNLI-en | TECA-ca | PAWS-X-ca | PAWS-X-es | PAWS-X-en | Parafraseja |
| ------|:-------:|:-------:|:-------:|:-------:|:---------:|:---------:|:---------:|:-----------:|
Random | 33.33 | 33.33 | 33.33 | 33.33 | 50.00 | 50.00 | 50.00 | 50.00 |
mGPT-1.3B | 40.06 | 43.81 | 45.67 | 37.03 | 51.00 | 52.30 | 56.15 | 51.32 |
GPT-Neo-1.3B | 41.44 | 45.57 | 49.92 | 35.38 | 54.65 | 53.40 | 54.60 | 51.70 |
Pythia-1.4B | 42.46 | 45.61 | 51.00 | 37.46 | 54.15 | 52.50 | **57.70** | 55.23 |
OPT-1.3B | 40.08 | 44.53 | **52.48** | 36.14 | 54.10 | 52.55 | 55.90 | 53.23 |
Falcon-rw-1.3B | 34.53 | 35.85 | 45.73 | 34.96 | 54.25 | **54.05** | 53.65 | 50.60 |
Cerebras-GPT-1.3B | 36.83 | 38.88 | 47.25 | 35.62 | 52.40 | 52.20 | 55.95 | 52.05 |
BLOOM-1.1B | **47.19** | **46.39** | 49.44 | 41.38 | **55.05** | 54.05 | 54.75 | 55.65 |
FLOR-760M | 46.93 | 46.03 | 46.11 | **42.14** | 52.35 | 52.50 | 54.85 | **56.55** |
### Commonsense Reasoning and Translation
| Model | XStoryCloze-es | XStoryCloze-en | COPA-ca | COPA-en | FloRes (ca->es) | FloRes (es->ca) | FloRes (ca->en) | FloRes (en->ca) | FloRes (es->en) | FloRes (en->es) |
| ------|:--------------:|:--------------:|:-------:|:-------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
Random | 50.00 | 50.00 | 50.00 | 50.00 | - | - | - | - | - | - |
mGPT-1.3B | 55.33 | 60.09 | 52.20 | 63.40 | 3.25 | 2.96 | 9.25 | 3.79 | 17.75 | 15.34 |
GPT-Neo-1.3B | 51.42 | 66.58 | 53.40 | 74.80 | 3.27 | 3.80 | 17.77 | 5.49 | 17.70 | 12.04 |
Pythia-1.4B | 54.14 | 68.37 | 52.20 | 78.60 | 9.68 | 5.74 | 24.03 | 11.10 | 21.50 | 15.04 |
OPT-1.3B | 53.94 | 69.95 | 52.60 | 76.20 | 3.14 | 3.52 | 15.39 | 2.00 | 16.33 | 6.53 |
Falcon-rw-1.3B | 51.09 | **71.34** | 52.40 | **79.60** | 3.03 | 3.59 | 8.89 | 3.01 | 14.17 | 6.50 |
Cerebras-GPT-1.3B | 49.11 | 60.62 | 51.40 | 66.80 | 2.42 | 1.81 | 2.69 | 0.82 | 3.36 | 1.77 |
BLOOM-1.1B | 57.91 | 62.48 | 62.80 | 66.40 | 21.62 | 15.28 | 31.16 | 21.28 | **20.92** | 16.84 |
FLOR-760M | **61.42** | 61.42 | **65.40** | 64.20 | **22.62** | **15.77** | **32.26** | **26.04** | 20.91 | **18.08** |
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details> |
fatgong/5HeApiVxkbex1RQdUDw31cFurcheu4zs7CgZbvypj1WQEBii_vgg | fatgong | 2024-03-27T22:48:19Z | 556 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-09T14:16:48Z | Entry not found |
llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0 | llm-jp | 2024-04-30T02:38:17Z | 556 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"dataset:databricks/databricks-dolly-15k",
"dataset:llm-jp/databricks-dolly-15k-ja",
"dataset:llm-jp/oasst1-21k-en",
"dataset:llm-jp/oasst1-21k-ja",
"dataset:llm-jp/oasst2-33k-en",
"dataset:llm-jp/oasst2-33k-ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-23T03:10:58Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
datasets:
- databricks/databricks-dolly-15k
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-en
- llm-jp/oasst1-21k-ja
- llm-jp/oasst2-33k-en
- llm-jp/oasst2-33k-ja
---
# llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0
This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan.
| Model Variant |
| :--- |
|**Instruction models**|
| [llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| [llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) |
| |
| :--- |
|**Pre-trained models**|
| [llm-jp-13b-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-v2.0) |
Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0", device_map="auto", torch_dtype=torch.bfloat16)
chat = [
{"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
{"role": "user", "content": "自然言語処理とは何か"},
]
tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
output = model.generate(
tokenized_input,
max_new_tokens=100,
do_sample=True,
top_p=0.95,
temperature=0.7,
repetition_penalty=1.05,
)[0]
print(tokenizer.decode(output))
```
## Model Details
- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 256B
|Model|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|:---:|
|13b model|13b|40|5120|40|4096|
## Training
- **Pre-training:**
- **Hardware:** 128 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** Megatron-LM
- **Instruction tuning:**
- **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/))
- **Software:** [TRL](https://github.com/huggingface/trl) and [DeepSpeed](https://github.com/microsoft/DeepSpeed)
## Tokenizer
The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (100k: code20K_en40K_ja60K.ver2.2)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model
- **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM algorithm.
- **Training data:** A subset of the datasets for model pre-training
- **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code)
- The actual size of the vocabulary in the pretrained model is 97,024 due to round-up to multiples of 256 (see the quick check below).
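As a quick sanity check (an optional addition, not part of the original card), the tokenizer can be loaded with `transformers` and its vocabulary size inspected:
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
print(tok.vocab_size)  # expected to be around 96,867; the embedding matrix itself is padded to 97,024
```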
## Datasets
### Pre-training
The models have been pre-trained using a blend of the following datasets.
| Language | Dataset | Tokens|
|:---|:---|---:|
|Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.4B
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v2)|130.7B
|English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|4.7B
||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|110.3B
|Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|8.7B
### Instruction tuning
The models have been fine-tuned on the following datasets.
| Language | Dataset | description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-001](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset |
| |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLMs' safety |
| |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL |
| |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL |
| |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL |
|English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - |
| |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) |
## Evaluation
You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) (v1.3.0) for the evaluation.
Besides, we used LLM-as-a-judge frameworks, [Japanese Vicuna QA Benchmark](https://github.com/ku-nlp/ja-vicuna-qa-benchmark/) and [Japanese MT Bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge), for evaluation.
For details, please refer to [our technical blog](https://llm-jp.nii.ac.jp/blog/2024/04/30/v2.0-release.html) (in Japanese).
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Send Questions to
llm-jp(at)nii.ac.jp
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Card Authors
*The names are listed in alphabetical order.*
Namgi Han, Tatsuya Hiraoka, Hirokazu Kiyomaru, Takashi Kodama, and Hiroshi Matsuda. |
QuantFactory/OpenBioLLM-Llama3-8B-GGUF | QuantFactory | 2024-04-29T14:54:39Z | 556 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-28T11:28:48Z | ---
license: llama3
library_name: transformers
pipeline_tag: text-generation
---
# Llama3-OpenBioLLM-8B-GGUF
This is a quantized version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B), created using llama.cpp.
## Available Quants
* Q2_K
* Q3_K_L
* Q3_K_M
* Q3_K_S
* Q4_0
* Q4_1
* Q4_K_M
* Q4_K_S
* Q5_0
* Q5_1
* Q5_K_M
* Q5_K_S
* Q6_K
* Q8_0 |
lmstudio-community/Yi-1.5-6B-Chat-GGUF | lmstudio-community | 2024-05-13T02:36:27Z | 556 | 7 | null | [
"gguf",
"text-generation",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-05-13T01:07:59Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
base_model: 01-ai/Yi-1.5-6B-Chat
lm_studio:
  param_count: 6B
  use_case: general
  release_date: 12-05-2024
  model_creator: 01-ai
  prompt_template: ChatML
  system_prompt: "You are a helpful assistant."
  base_model: llama
  original_repo: 01-ai/Yi-1.5-6B-Chat
---
## 💫 Community Model> Yi 1.5 6B Chat by 01-ai
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [01-ai](https://huggingface.co/01-ai)<br>
**Original model**: [Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2854](https://github.com/ggerganov/llama.cpp/releases/tag/b2854)<br>
## Model Summary:
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.<br>
This model should perform well on a wide range of tasks, such as coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.<br>
## Prompt Template:
Choose the `ChatML` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<|im_end|>
```
## Technical Details
No technical details have been released about this model.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
hallisky/blog-classifier-roberta-large | hallisky | 2024-05-26T08:37:38Z | 556 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-05-26T06:46:29Z | ---
license: apache-2.0
---
|
bartowski/Llama-3-11.5B-V2-GGUF | bartowski | 2024-05-31T06:02:58Z | 556 | 2 | null | [
"gguf",
"text-generation",
"license:other",
"region:us"
]
| text-generation | 2024-05-31T05:38:50Z | ---
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-11.5B-V2
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/Replete-AI/Llama-3-11.5B-V2
All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
No chat template is specified, so a default is used. This may be incorrect; check the original model card for details.
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-11.5B-V2-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q8_0.gguf) | Q8_0 | 12.24GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-11.5B-V2-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q6_K.gguf) | Q6_K | 9.45GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-11.5B-V2-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q5_K_M.gguf) | Q5_K_M | 8.19GB | High quality, *recommended*. |
| [Llama-3-11.5B-V2-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q5_K_S.gguf) | Q5_K_S | 7.99GB | High quality, *recommended*. |
| [Llama-3-11.5B-V2-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q4_K_M.gguf) | Q4_K_M | 7.01GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-11.5B-V2-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q4_K_S.gguf) | Q4_K_S | 6.67GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-11.5B-V2-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ4_XS.gguf) | IQ4_XS | 6.31GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-11.5B-V2-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q3_K_L.gguf) | Q3_K_L | 6.15GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-11.5B-V2-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q3_K_M.gguf) | Q3_K_M | 5.69GB | Even lower quality. |
| [Llama-3-11.5B-V2-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ3_M.gguf) | IQ3_M | 5.34GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-11.5B-V2-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q3_K_S.gguf) | Q3_K_S | 5.16GB | Low quality, not recommended. |
| [Llama-3-11.5B-V2-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ3_XS.gguf) | IQ3_XS | 4.94GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-11.5B-V2-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ3_XXS.gguf) | IQ3_XXS | 4.61GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-11.5B-V2-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-Q2_K.gguf) | Q2_K | 4.46GB | Very low quality but surprisingly usable. |
| [Llama-3-11.5B-V2-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ2_M.gguf) | IQ2_M | 4.12GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-11.5B-V2-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ2_S.gguf) | IQ2_S | 3.84GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-11.5B-V2-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-V2-GGUF/blob/main/Llama-3-11.5B-V2-IQ2_XS.gguf) | IQ2_XS | 3.63GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-11.5B-V2-GGUF --include "Llama-3-11.5B-V2-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-11.5B-V2-GGUF --include "Llama-3-11.5B-V2-Q8_0.gguf/*" --local-dir Llama-3-11.5B-V2-Q8_0
```
You can either specify a new local-dir (Llama-3-11.5B-V2-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
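To make that rule of thumb concrete, here is a small illustrative helper (not part of any tooling; the 1.5GB headroom value is just an assumption) that picks the largest quant from the table above fitting a given memory budget:

```python
# Illustrative: choose the largest quant (sizes in GB, from the table above)
# that leaves some headroom below your VRAM (or VRAM + RAM) budget.
QUANT_SIZES_GB = {
    "Q8_0": 12.24, "Q6_K": 9.45, "Q5_K_M": 8.19, "Q5_K_S": 7.99,
    "Q4_K_M": 7.01, "Q4_K_S": 6.67, "IQ4_XS": 6.31, "Q3_K_L": 6.15,
    "Q3_K_M": 5.69, "IQ3_M": 5.34, "Q3_K_S": 5.16, "IQ3_XS": 4.94,
    "IQ3_XXS": 4.61, "Q2_K": 4.46, "IQ2_M": 4.12, "IQ2_S": 3.84, "IQ2_XS": 3.63,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.5):
    """Return the largest quant that fits within budget_gb minus headroom, or None."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # 8GB card -> a ~6GB quant such as IQ4_XS
print(pick_quant(24.0))  # 24GB card -> Q8_0 fits easily
```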
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is another backend that supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/IceCaffeLatteRP-7b-GGUF | mradermacher | 2024-05-31T10:49:11Z | 556 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:icefog72/IceCaffeLatteRP-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-31T07:37:03Z | ---
base_model: icefog72/IceCaffeLatteRP-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/icefog72/IceCaffeLatteRP-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
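If you prefer Python over the CLI, a minimal sketch using `huggingface_hub` to pull a single file from the table below might look like this:

```python
# Sketch: download one quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/IceCaffeLatteRP-7b-GGUF",
    filename="IceCaffeLatteRP-7b.Q4_K_M.gguf",  # "fast, recommended" in the table below
)
print(path)  # local cache path of the downloaded GGUF file
```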
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF/resolve/main/IceCaffeLatteRP-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
abmorton/wall-standard-2-straight | abmorton | 2024-07-01T16:50:28Z | 556 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-07-01T16:43:00Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### wall-standard-2-straight Dreambooth model trained by abmorton with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
KBLab/bert-base-swedish-lowermix-reallysimple-ner | KBLab | 2023-04-19T07:18:42Z | 555 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"sequence-tagger-model",
"sv",
"dataset:KBLab/sucx3_ner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:04Z | ---
model:
- KB/bert-base-swedish-cased
tags:
- token-classification
- sequence-tagger-model
- bert
language: sv
datasets:
- KBLab/sucx3_ner
widget:
- text: "Emil bor i Lönneberga"
---
# KB-BERT for NER
## Mixed cased and uncased data
This model is based on [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) and was fine-tuned on the [SUCX 3.0 - NER](https://huggingface.co/datasets/KBLab/sucx3_ner) corpus, using the _simple_ tags and partially lowercased data.
For this model we used a variation of the data that did **not** use BIO-encoding to differentiate between the beginnings (B) and insides (I) of named entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
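As a quick sanity check, the model can be loaded with the standard token-classification pipeline; the sketch below is illustrative, and the exact labels you get back depend on the SUCX 3.0 _simple_ tag set:

```python
# Minimal sketch: run Swedish NER with the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KBLab/bert-base-swedish-lowermix-reallysimple-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Emil bor i Lönneberga"))
```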
You find more information about the model and the performance on our blog: https://kb-labb.github.io/posts/2022-02-07-sucx3_ner |
r3dhummingbird/DialoGPT-medium-joshua | r3dhummingbird | 2023-03-17T16:58:25Z | 555 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 200 tokens (max_length),
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last ouput tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
uclanlp/visualbert-vcr | uclanlp | 2021-05-31T11:12:33Z | 555 | 0 | transformers | [
"transformers",
"pytorch",
"visual_bert",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05Z | Entry not found |
timm/xcit_medium_24_p8_224.fb_in1k | timm | 2024-02-10T23:43:28Z | 555 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-13T02:16:05Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_medium_24_p8_224.fb_in1k
A XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 84.3
- GMACs: 63.5
- Activations (M): 121.2
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_medium_24_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_medium_24_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
timm/res2net50_26w_8s.in1k | timm | 2023-04-24T00:06:31Z | 555 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
]
| image-classification | 2023-04-24T00:05:43Z | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2net50_26w_8s.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 48.4
- GMACs: 8.4
- Activations (M): 17.9
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2net50_26w_8s.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_8s.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_8s.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
wsj1995/controlnet-models | wsj1995 | 2023-12-26T12:59:08Z | 555 | 1 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2023-07-27T07:49:18Z | Entry not found |
Mxode/Pythia-70m-Synonym-Sentence-Converter | Mxode | 2023-11-18T19:40:03Z | 555 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"tiny",
"small",
"synonym",
"tool",
"converter",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-08T17:11:36Z | ---
license: apache-2.0
language:
- en
tags:
- tiny
- small
- synonym
- tool
- converter
---
## What's this?
A **tiny** model that can perform **paraphrasing** or **synonym substitution**.
The base model is [pythia-70m](https://huggingface.co/EleutherAI/pythia-70m). This model was fine-tuned with 10 epochs using [Q-Lora](https://github.com/artidoro/qlora) method on my own training set.
## How to use
### quick start
First import the model from hf:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model_name_or_path = 'Mxode/Pythia-70m-Synonym-Sentence-Converter'
device = 'cuda'
model = GPTNeoXForCausalLM.from_pretrained(model_name_or_path).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
# prompt template
prompt = '<|prompt|>Convert the following passage into synonymous sentences.<|prompt|>\n'
# any text you wish to convert, preferably in complete single sentences.
content = 'The theories and methods of systems science are extensively employed in various domains, including biology, economics, and sociology.'
text = prompt + content
```
Then generate:
```python
inputs = tokenizer(text, return_tensors="pt").to(device)
input_ids = inputs.input_ids
tokens = model.generate(
**inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=100,
do_sample=True,
)
# strip the input
response = tokenizer.decode(tokens[0]).replace(text, "").strip('<|endoftext|>')
# I call it 'Synonymizer' :)
print(f'Synonymizer: {response}')
### output:
### The disciplines of systems science are extensively employed in various domains, including biology, economics, and sociology.
```
Or maybe we'll try some news it was definitely never trained on? Let's grab a bit of sports news from ESPN and try:
```python
### ...
content = 'As both teams exited the court for halftime, Baynes and Mayen were shoulder to shoulder.'
### ...
print(f'Synonymizer: {response}')
### output:
### As the team neets around the court to ease their shifts, Baynes and Middets were partnerly paryyneen.
### sometimes:
### Begantly mastitatively, Baynes and Mayen staged their team rested the Tywindes rested the Tywindes rested the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid the Tywindes laid
```
WELL, as you can see, this is after all only an **experimental tiny model** and with that in mind I can give it a 7.5 out of 10 for performance.
I didn't tune the generation hyperparameters; you could try a lower temperature plus a slightly higher repetition_penalty, and the output might be better.
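For reference, a hedged tweak along those lines (continuing from the generation snippet above, with guessed values that I have not benchmarked) could be:

```python
# Untested variation of the earlier generate() call: lower temperature,
# slightly higher repetition penalty. model/tokenizer/inputs/text as defined above.
tokens = model.generate(
    **inputs,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
    repetition_penalty=1.3,
)
print(tokenizer.decode(tokens[0]).replace(text, "").strip('<|endoftext|>'))
```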
I'll follow up by training a slightly larger model on more data, and hopefully add support for multiple languages. We all know that bigger models have better generalization capabilities - but smaller models are really cool :)
|
camenduru/DemoFusion | camenduru | 2023-12-07T05:22:15Z | 555 | 4 | diffusers | [
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-12-07T05:17:30Z | Entry not found |
MoritzLaurer/xtremedistil-l6-h256-zeroshot-v1.1-all-33 | MoritzLaurer | 2024-01-11T17:10:07Z | 555 | 6 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"text-classification",
"zero-shot-classification",
"en",
"arxiv:2312.17543",
"base_model:microsoft/xtremedistil-l6-h256-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| zero-shot-classification | 2024-01-10T15:18:34Z | ---
base_model: microsoft/xtremedistil-l6-h256-uncased
language:
- en
tags:
- text-classification
- zero-shot-classification
pipeline_tag: zero-shot-classification
library_name: transformers
license: mit
---
# xtremedistil-l6-h256-zeroshot-v1.1-all-33
This model was fine-tuned using the same pipeline as described in
the model card for [MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33)
and in this [paper](https://arxiv.org/pdf/2312.17543.pdf).
The foundation model is [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
The model only has 22 million backbone parameters and 30 million vocabulary parameters.
The backbone parameters are the main parameters active during inference, providing a significant speedup over larger models.
The model weighs in at only 25 MB.
This model was trained to provide a very small and highly efficient zeroshot option,
especially for edge devices or in-browser use-cases with transformers.js.
## Usage and other details
For usage instructions and other details refer to
this model card [MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33)
and this [paper](https://arxiv.org/pdf/2312.17543.pdf).
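As a quick illustration (see the linked card for the full recommended usage), a standard zero-shot call might look like the sketch below; the example text and candidate labels are made up:

```python
# Minimal zero-shot classification sketch with the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/xtremedistil-l6-h256-zeroshot-v1.1-all-33",
)
result = classifier(
    "The battery on this phone lasts two full days.",
    candidate_labels=["positive", "negative", "neutral"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```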
## Metrics:
I did not do zeroshot evaluation for this model, to save time and compute.
The table below shows standard accuracy for all datasets the model was trained on (note that the NLI datasets are binary).
General takeaway: the model is much more efficient than its larger sisters, but it performs less well.
|Datasets|mnli_m|mnli_mm|fevernli|anli_r1|anli_r2|anli_r3|wanli|lingnli|wellformedquery|rottentomatoes|amazonpolarity|imdb|yelpreviews|hatexplain|massive|banking77|emotiondair|emocontext|empathetic|agnews|yahootopics|biasframes_sex|biasframes_offensive|biasframes_intent|financialphrasebank|appreviews|hateoffensive|trueteacher|spam|wikitoxic_toxicaggregated|wikitoxic_obscene|wikitoxic_identityhate|wikitoxic_threat|wikitoxic_insult|manifesto|capsotu|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.894|0.895|0.854|0.629|0.582|0.618|0.772|0.826|0.684|0.794|0.91|0.879|0.935|0.676|0.651|0.521|0.654|0.707|0.369|0.858|0.649|0.876|0.836|0.839|0.849|0.892|0.894|0.525|0.976|0.88|0.901|0.874|0.903|0.886|0.433|0.619|
|Inference text/sec (A10G GPU, batch=128)|4117.0|4093.0|1935.0|2984.0|3094.0|2683.0|5788.0|4926.0|9701.0|6359.0|1843.0|692.0|756.0|5561.0|10172.0|9070.0|7511.0|7480.0|2256.0|3942.0|1020.0|4362.0|4034.0|4185.0|5449.0|2606.0|6343.0|931.0|5550.0|864.0|839.0|837.0|832.0|857.0|4418.0|4845.0|
|
arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled | arnavgrg | 2024-02-04T21:52:25Z | 555 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-04T21:34:29Z | ---
license: apache-2.0
tags:
- text-generation-inference
---
This is an upscaled fp16 variant of the original CodeLlama-70b-instruct base model by Meta after it has been loaded with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the linear4bit layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time.
_Note: The quantization operation to nf4 is not lossless, so the model weights for the linear layers are lossy, which means that this model will not work as well as the official base model._
To use this model, you can just load it via `transformers` in fp16:
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled",
device_map="auto",
torch_dtype=torch.float16,
)
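
# --- Illustrative follow-up (not from the original card): a quick generation check. ---
# Assumes this repo ships a tokenizer; if it does not, load the tokenizer from the
# original CodeLlama-70b-Instruct repo instead, and note that the instruct prompt
# format of the base model still applies.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled")
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))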
``` |
ChrisWilson011016/5ED9EcFBnuBdA5J9iXDc7JAS1FnN4CVSSG6gDJiNwyFdotm_vgg | ChrisWilson011016 | 2024-03-04T18:56:08Z | 555 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-24T15:21:54Z | Entry not found |
minatolegend0305/5Ekf1rJGHCfiMqeX3VrYy9oBDk5DAdHh5C1i3n4Zn6CFfNT3_vgg | minatolegend0305 | 2024-03-12T13:11:34Z | 555 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-04T19:45:21Z | Entry not found |
operator-technology/op-phi-1.5-gguf | operator-technology | 2024-04-02T18:51:11Z | 555 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-04-02T13:01:00Z | Entry not found |
mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF | mradermacher | 2024-05-05T14:54:57Z | 555 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tdrussell/Llama-3-70B-Instruct-Storywriter",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-01T23:58:22Z | ---
base_model: tdrussell/Llama-3-70B-Instruct-Storywriter
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/tdrussell/Llama-3-70B-Instruct-Storywriter
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70B-Instruct-Storywriter-GGUF/resolve/main/Llama-3-70B-Instruct-Storywriter.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
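Since the Q6_K and Q8_0 quants above ship as two parts, here is a minimal sketch for stitching a pair back together, assuming both parts have already been downloaded into the current directory and are plain byte-wise splits (which is what the concatenation note in the Usage section implies):

```python
# Concatenate the two Q6_K part files into a single GGUF.
import shutil

parts = [
    "Llama-3-70B-Instruct-Storywriter.Q6_K.gguf.part1of2",
    "Llama-3-70B-Instruct-Storywriter.Q6_K.gguf.part2of2",
]
with open("Llama-3-70B-Instruct-Storywriter.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```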
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yzhuang/TinyLlama-1.1B-Chat-v0.1_fictional_arc_challenge_Korean_v1 | yzhuang | 2024-05-08T09:33:49Z | 555 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-08T09:09:50Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: TinyLlama-1.1B-Chat-v0.1_fictional_arc_challenge_Korean_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyLlama-1.1B-Chat-v0.1_fictional_arc_challenge_Korean_v1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.1](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ukung/Gemma-2B-INDO-V2-GGUF | ukung | 2024-05-25T09:42:33Z | 555 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-25T09:11:07Z | Entry not found |
RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf | RichardErkhov | 2024-05-27T11:48:03Z | 555 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-27T06:24:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
h2ogpt-16k-codellama-7b-instruct - GGUF
- Model creator: https://huggingface.co/h2oai/
- Original model: https://huggingface.co/h2oai/h2ogpt-16k-codellama-7b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [h2ogpt-16k-codellama-7b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q2_K.gguf) | Q2_K | 2.36GB |
| [h2ogpt-16k-codellama-7b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [h2ogpt-16k-codellama-7b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [h2ogpt-16k-codellama-7b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [h2ogpt-16k-codellama-7b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [h2ogpt-16k-codellama-7b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q3_K.gguf) | Q3_K | 3.07GB |
| [h2ogpt-16k-codellama-7b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [h2ogpt-16k-codellama-7b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [h2ogpt-16k-codellama-7b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [h2ogpt-16k-codellama-7b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q4_0.gguf) | Q4_0 | 3.56GB |
| [h2ogpt-16k-codellama-7b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [h2ogpt-16k-codellama-7b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q4_K_S.gguf) | Q4_K_S | 2.79GB |
| [h2ogpt-16k-codellama-7b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q4_K.gguf) | Q4_K | 3.8GB |
| [h2ogpt-16k-codellama-7b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [h2ogpt-16k-codellama-7b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q4_1.gguf) | Q4_1 | 3.95GB |
| [h2ogpt-16k-codellama-7b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q5_0.gguf) | Q5_0 | 4.33GB |
| [h2ogpt-16k-codellama-7b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [h2ogpt-16k-codellama-7b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q5_K.gguf) | Q5_K | 4.45GB |
| [h2ogpt-16k-codellama-7b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [h2ogpt-16k-codellama-7b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q5_1.gguf) | Q5_1 | 4.72GB |
| [h2ogpt-16k-codellama-7b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q6_K.gguf) | Q6_K | 5.15GB |
| [h2ogpt-16k-codellama-7b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/h2oai_-_h2ogpt-16k-codellama-7b-instruct-gguf/blob/main/h2ogpt-16k-codellama-7b-instruct.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: llama2
---
|
MY555/tinyllama-colorist-v0 | MY555 | 2024-05-30T13:36:37Z | 555 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-30T13:10:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
acen20/Meta-Llama-3-8B-Q2_K-GGUF | acen20 | 2024-06-22T21:06:06Z | 555 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
]
| text-generation | 2024-06-22T21:05:51Z | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# acen20/Meta-Llama-3-8B-Q2_K-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo acen20/Meta-Llama-3-8B-Q2_K-GGUF --hf-file meta-llama-3-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo acen20/Meta-Llama-3-8B-Q2_K-GGUF --hf-file meta-llama-3-8b-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo acen20/Meta-Llama-3-8B-Q2_K-GGUF --hf-file meta-llama-3-8b-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo acen20/Meta-Llama-3-8B-Q2_K-GGUF --hf-file meta-llama-3-8b-q2_k.gguf -c 2048
```
|
keremberke/yolov5s-valorant | keremberke | 2022-12-30T20:49:49Z | 554 | 3 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/valorant-object-detection",
"model-index",
"region:us"
]
| object-detection | 2022-12-28T09:47:44Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/valorant-object-detection
model-index:
- name: keremberke/yolov5s-valorant
results:
- task:
type: object-detection
dataset:
type: keremberke/valorant-object-detection
name: keremberke/valorant-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.9818427788145484 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-valorant" src="https://huggingface.co/keremberke/yolov5s-valorant/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-valorant')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-valorant --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
AIARTCHAN/AbyssHellHero | AIARTCHAN | 2023-03-13T08:44:01Z | 554 | 15 | diffusers | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-19T10:23:00Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# AbyssHellHero
[Original post](https://arca.live/b/aiart/70124182)
[huggingface](https://huggingface.co/KMAZ/AbyssHell-AbyssMaple)
# Download
- [Original 7.7GB](https://huggingface.co/KMAZ/TestSamples/resolve/main/AbyssHellHero.ckpt)
- [safetensors 4.27GB](https://huggingface.co/AIARTCHAN/AbyssHellHero/resolve/main/AbyssHellHero-no-ema.safetensors)
- [safetensors fp16 2.13GB](https://huggingface.co/AIARTCHAN/AbyssHellHero/resolve/main/AbyssHellHero-fp16.safetensors)
A model made by merging the LoRAs directly into the base model with the recipe AbyssOrangeMix2 + Helltaker 0.27 + HeroAcademia 0.2. The model name is just a rough combination of the first parts of those names.
Using it together with a negative embedding such as [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative) is recommended, as is putting an emphasis of 1.1 or higher on tags.
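For reference, a minimal 🧨 diffusers sketch along those lines (it assumes the diffusers weights in this repo load with `StableDiffusionPipeline` and that the EasyNegative embedding file has been downloaded locally; emphasis syntax such as `(tag:1.1)` applies to webui-style frontends rather than plain diffusers):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "AIARTCHAN/AbyssHellHero", torch_dtype=torch.float16
).to("cuda")
# load the EasyNegative textual-inversion embedding (local file name assumed)
pipe.load_textual_inversion("EasyNegative.safetensors", token="EasyNegative")
prompt = "1girl, ray tracing, best quality, masterpiece, highres, extremely detailed 8K wallpaper, intricate detail"
image = pipe(prompt, negative_prompt="EasyNegative", num_inference_steps=28).images[0]
image.save("sample.png")
```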




|
timm/focalnet_small_lrf.ms_in1k | timm | 2024-02-10T23:32:01Z | 554 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2203.11926",
"license:mit",
"region:us"
]
| image-classification | 2023-03-18T04:34:45Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for focalnet_small_lrf.ms_in1k
A FocalNet image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 50.3
- GMACs: 8.7
- Activations (M): 28.6
- Image size: 224 x 224
- **Papers:**
- Focal Modulation Networks: https://arxiv.org/abs/2203.11926
- **Original:** https://github.com/microsoft/FocalNet
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('focalnet_small_lrf.ms_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'focalnet_small_lrf.ms_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for focalnet_base_srf:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'focalnet_small_lrf.ms_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (i.e. a (batch_size, num_features, H, W) tensor)
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{yang2022focal,
title={Focal Modulation Networks},
author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Jianfeng Gao},
journal={Advances in Neural Information Processing Systems (NeurIPS)},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
DeepFloyd/IF-I-L-v1.0 | DeepFloyd | 2023-06-02T19:04:26Z | 554 | 15 | diffusers | [
"diffusers",
"pytorch",
"safetensors",
"if",
"text-to-image",
"arxiv:2205.11487",
"arxiv:2110.02861",
"license:deepfloyd-if-license",
"diffusers:IFPipeline",
"region:us"
]
| text-to-image | 2023-03-21T19:01:41Z | ---
license: deepfloyd-if-license
extra_gated_prompt: "DeepFloyd LICENSE AGREEMENT\nThis License Agreement (as may be amended in accordance with this License Agreement, “License”), between you, or your employer or other entity (if you are entering into this agreement on behalf of your employer or other entity) (“Licensee” or “you”) and Stability AI Ltd.. (“Stability AI” or “we”) applies to your use of any computer program, algorithm, source code, object code, or software that is made available by Stability AI under this License (“Software”) and any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software (“Documentation”).\nBy clicking “I Accept” below or by using the Software, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software or Documentation (collectively, the “Software Products”), and you must immediately cease using the Software Products. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Stability AI that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products on behalf of your employer or other entity.\n1. LICENSE GRANT\n a. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Stability AI’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License.\n b. You may make a reasonable number of copies of the Documentation solely for use in connection with the license to the Software granted above.\n c. The grant of rights expressly set forth in this Section 1 (License Grant) are the complete grant of rights to you in the Software Products, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Stability AI and its licensors reserve all rights not expressly granted by this License.\L\n2. RESTRICTIONS\n You will not, and will not permit, assist or cause any third party to:\n a. 
use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes or in the service of nuclear technology, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing;\n b. alter or remove copyright and other proprietary notices which appear on or in the Software Products;\n c. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Stability AI in connection with the Software, or to circumvent or remove any usage restrictions, or to enable functionality disabled by Stability AI; or\n d. offer or impose any terms on the Software Products that alter, restrict, or are inconsistent with the terms of this License.\n e. 1) violate any applicable U.S. and non-U.S. export control and trade sanctions laws (“Export Laws”); 2) directly or indirectly export, re-export, provide, or otherwise transfer Software Products: (a) to any individual, entity, or country prohibited by Export Laws; (b) to anyone on U.S. or non-U.S. government restricted parties lists; or (c) for any purpose prohibited by Export Laws, including nuclear, chemical or biological weapons, or missile technology applications; 3) use or download Software Products if you or they are: (a) located in a comprehensively sanctioned jurisdiction, (b) currently listed on any U.S. or non-U.S. restricted parties list, or (c) for any purpose prohibited by Export Laws; and (4) will not disguise your location through IP proxying or other methods.\L\n3. ATTRIBUTION\n Together with any copies of the Software Products (as well as derivative works thereof or works incorporating the Software Products) that you distribute, you must provide (i) a copy of this License, and (ii) the following attribution notice: “DeepFloyd is licensed under the DeepFloyd License, Copyright (c) Stability AI Ltd. All Rights Reserved.”\L\n4. DISCLAIMERS\n THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” and “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. STABILITY AIEXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. STABILITY AI MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.\L\n5. 
LIMITATION OF LIABILITY\n TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL STABILITY AI BE LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF STABILITY AI HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, “SOFTWARE MATERIALS”) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A “HIGH-RISK USE”). IF YOU ELECT TO USE ANY OF THE SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.\L\n6. INDEMNIFICATION\n You will indemnify, defend and hold harmless Stability AI and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the “Stability AI Parties”) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Stability AI Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, “Claims”) arising out of or related to: (a) your access to or use of the Software Products (as well as any results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Stability AI Parties of any such Claims, and cooperate with Stability AI Parties in defending such Claims. You will also grant the Stability AI Parties sole control of the defense or settlement, at Stability AI’s sole option, of any Claims. This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Stability AI or the other Stability AI Parties.\L\n7. TERMINATION; SURVIVAL\n a. This License will automatically terminate upon any breach by you of the terms of this License.\L\Lb. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you.\L\Lc. The following sections survive termination of this License: 2 (Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation on Liability), 6 (Indemnification) 7 (Termination; Survival), 8 (Third Party Materials), 9 (Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).\L\n8. 
THIRD PARTY MATERIALS\n The Software Products may contain third-party software or other components (including free and open source software) (all of the foregoing, “Third Party Materials”), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Stability AI does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk.\L\n9. TRADEMARKS\n Licensee has not been granted any trademark license as part of this License and may not use any name or mark associated with Stability AI without the prior written permission of Stability AI, except to the extent necessary to make the reference required by the “ATTRIBUTION” section of this Agreement.\L\n10. APPLICABLE LAW; DISPUTE RESOLUTION\n This License will be governed and construed under the laws of the State of California without regard to conflicts of law provisions. Any suit or proceeding arising out of or relating to this License will be brought in the federal or state courts, as applicable, in San Mateo County, California, and each party irrevocably submits to the jurisdiction and venue of such courts.\L\n11. MISCELLANEOUS\n If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Stability AI to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Stability AI regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Stability AI regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Stability AI."
extra_gated_fields:
"Organization /\_Affiliation": text
Previously related publications: text
I accept the above license agreement, and will use the Software non-commercially and for research purposes only: checkbox
tags:
- if
- text-to-image
inference: false
---
# IF-I-L-v1.0
DeepFloyd-IF is a pixel-based text-to-image triple-cascaded diffusion model that can generate pictures with a new state of the art in photorealism and language understanding. The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID-30K score of `6.66` on the COCO dataset.
*Inspired by* [*Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding*](https://arxiv.org/pdf/2205.11487.pdf)

## Model Details
- **Developed by:** DeepFloyd, StabilityAI
- **Model type:** pixel-based text-to-image cascaded diffusion model
- **Cascade Stage:** I
- **Num Parameters:** 900M
- **Language(s):** primarily English and, to a lesser extent, other Romance languages
- **License:** <span style="color:blue"><a href="https://huggingface.co/spaces/DeepFloyd/deepfloyd-if-license">DeepFloyd IF License Agreement</a></span>
- **Model Description:** DeepFloyd-IF is a modular model composed of a frozen text encoder and three pixel-based cascaded diffusion modules, each designed to generate images of increasing resolution: 64x64, 256x256, and 1024x1024. All stages of the model utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling
- **Resources for more information:** [GitHub](https://github.com/deep-floyd/IF), [Website](https://deepfloyd.ai), [All Links](https://linktr.ee/deepfloyd)
## Using with `diffusers`
IF is integrated with the 🤗 Hugging Face [🧨 diffusers library](https://github.com/huggingface/diffusers/), which is optimized to run on GPUs with as little as 14 GB of VRAM.
Before you can use IF, you need to accept its usage conditions. To do so:
1. Make sure to have a [Hugging Face account](https://huggingface.co/join) and be logged in
2. Accept the license on the model card of [DeepFloyd/IF-I-L-v1.0](https://huggingface.co/DeepFloyd/IF-I-L-v1.0)
3. Make sure to log in locally. Install `huggingface_hub`:
```sh
pip install huggingface_hub --upgrade
```
run the login function in a Python shell
```py
from huggingface_hub import login
login()
```
and enter your [Hugging Face Hub access token](https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens).
Next we install `diffusers` and dependencies:
```sh
pip install diffusers accelerate transformers safetensors sentencepiece
```
And we can now run the model locally.
By default `diffusers` makes use of [model cpu offloading](https://huggingface.co/docs/diffusers/optimization/fp16#model-offloading-for-fast-inference-and-memory-savings) to run the whole IF pipeline with as little as 14 GB of VRAM.
If you are using `torch>=2.0.0`, make sure to **remove all** `enable_xformers_memory_efficient_attention()` functions.
* **Load all stages and offload to CPU**
```py
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil
import torch
# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-L-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_xformers_memory_efficient_attention() # remove line if torch.__version__ >= 2.0.0
stage_1.enable_model_cpu_offload()
# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_xformers_memory_efficient_attention() # remove line if torch.__version__ >= 2.0.0
stage_2.enable_model_cpu_offload()
# stage 3
safety_modules = {"feature_extractor": stage_1.feature_extractor, "safety_checker": stage_1.safety_checker, "watermarker": stage_1.watermarker}
stage_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16)
stage_3.enable_xformers_memory_efficient_attention() # remove line if torch.__version__ >= 2.0.0
stage_3.enable_model_cpu_offload()
```
* **Retrieve Text Embeddings**
```py
prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)
```
* **Run stage 1**
```py
generator = torch.manual_seed(0)
image = stage_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt").images
pt_to_pil(image)[0].save("./if_stage_I.png")
```
* **Run stage 2**
```py
image = stage_2(
image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
pt_to_pil(image)[0].save("./if_stage_II.png")
```
* **Run stage 3**
```py
image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images
image[0].save("./if_stage_III.png")
```
There are multiple ways to speed up the inference time and lower the memory consumption even more with `diffusers`. To do so, please have a look at the Diffusers docs:
- 🚀 [Optimizing for inference time](https://huggingface.co/docs/diffusers/api/pipelines/if#optimizing-for-speed)
- ⚙️ [Optimizing for low memory during inference](https://huggingface.co/docs/diffusers/api/pipelines/if#optimizing-for-memory)
For more in-detail information about how to use IF, please have a look at [the IF blog post](https://huggingface.co/blog/if) and the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/if) 📖.
The Diffusers DreamBooth scripts also support fine-tuning 🎨 [IF](https://huggingface.co/docs/diffusers/main/en/training/dreambooth#if).
With parameter efficient finetuning, you can add new concepts to IF with a single GPU and ~28 GB VRAM.
## Training
**Training Data:**
1.2B text-image pairs (based on LAION-A and a few additional internal datasets)
The test/validation parts of the datasets are not used at any cascade or stage of training. The validation part of COCO is only used to monitor the "online" loss behaviour during training (to catch incidents and other problems); it is never used for training.
**Training Procedure:** IF-I-L-v1.0 is a pixel-based diffusion cascade which uses T5-Encoder embeddings (hidden states) to generate 64px images. During training:
- Images are cropped to squares via a shifted-center-crop augmentation (a random shift from the center of up to 0.1 of the size) and resized to 64px using `Pillow==9.2.0` BICUBIC resampling with reducing_gap=None (which helps to avoid aliasing), then converted to BxCxHxW tensors
- Text prompts are encoded with the open-source frozen T5-v1_1-xxl text encoder (trained entirely by the Google team); a random 10% of texts are dropped to the empty string to enable classifier-free guidance (CFG)
- The non-pooled output of the text encoder is fed into a projection (a linear layer without activation) and used in the UNet backbone of the diffusion model via controlled hybrid self- and cross-attention
- The output of the text encoder is also pooled via attention pooling (64 heads) and used in the time embedding as additional features
- The diffusion process is limited to 1000 discrete steps, with a cosine beta schedule for noising the image
- The loss is a reconstruction objective between the noise that was added to the image and the prediction made by the UNet (a schematic sketch of this objective is given after this list)
- The training process for checkpoint IF-I-L-v1.0 consists of 2_500_000 steps + 500_000 extra steps at resolution 64x64 on all datasets, with a OneCycleLR policy, few-bit backward GELU activations, the AdamW8bit optimizer + DeepSpeed ZeRO-1, and a fully frozen T5-Encoder
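For illustration only, here is a minimal sketch of that objective and the prompt dropping described above (a generic `unet` and a frozen `encode_text` callable stand in for the real components; this is not the actual IF training code):
```python
import math
import torch
import torch.nn.functional as F
T = 1000  # number of discrete diffusion steps, as described above
def cosine_alphas_cumprod(T, s=0.008):
    # cumulative product of alphas for a cosine noise schedule
    t = torch.linspace(0, T, T + 1) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return (f / f[0])[1:]
alphas_cumprod = cosine_alphas_cumprod(T)
def training_step(unet, images, prompts, encode_text, drop_prob=0.1):
    # images: BxCxHxW tensors in [-1, 1] at 64px; unet and encode_text are assumed callables
    b = images.size(0)
    # drop ~10% of prompts to "" so the model also learns the unconditional branch (for CFG)
    prompts = ["" if torch.rand(1).item() < drop_prob else p for p in prompts]
    text_states = encode_text(prompts)                    # frozen T5 hidden states
    t = torch.randint(0, T, (b,), device=images.device)   # one random timestep per sample
    noise = torch.randn_like(images)
    a = alphas_cumprod.to(images.device)[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise    # forward noising q(x_t | x_0)
    pred = unet(noisy, t, text_states)                    # UNet predicts the added noise
    return F.mse_loss(pred, noise)                        # reconstruction objective
```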

**Hardware:** 20 x 8 x A100 GPUs
**Optimizer:** [AdamW8bit](https://arxiv.org/abs/2110.02861) + [DeepSpeed ZeRO-1](https://www.deepspeed.ai/tutorials/zero/)
**Batch:** 3200
**Learning rate**: [one-cycle](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html) cosine strategy, warmup 10000 steps, start_lr=4e-6, max_lr=1e-4, final_lr=1e-8;
_for extra 500_000 steps:_ [one-cycle](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html) cosine strategy, warmup 50_000 steps, start_lr=1e-8, max_lr=4e-6, final_lr=4e-8
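For reference, a rough sketch of how the main-run values above map onto PyTorch's `OneCycleLR` arguments (the placeholder module and the use of the 2_500_000-step count are assumptions, not the actual training code):
```python
import torch
model = torch.nn.Linear(8, 8)  # stands in for the UNet parameters in this sketch
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-6)  # the original run used 8-bit AdamW + DeepSpeed ZeRO-1
total_steps = 2_500_000
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1e-4,
    total_steps=total_steps,
    pct_start=10_000 / total_steps,  # 10_000 warmup steps
    anneal_strategy="cos",
    div_factor=1e-4 / 4e-6,          # start_lr = max_lr / div_factor = 4e-6
    final_div_factor=4e-6 / 1e-8,    # final_lr = start_lr / final_div_factor = 1e-8
)
```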

## Evaluation Results
`FID-30K: 8.06`

# Uses
## Direct Use
The model is released for research purposes. Any attempt to deploy the model in production requires not only that the LICENSE is followed but full liability over the person deploying the model.
Possible research areas and tasks include:
- Generation of artistic imagery and use in design and other artistic processes.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion but applies in the same way for IF_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model was trained mainly with English captions and will not work as well in other languages.
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have... (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
IF was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
IF mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
*This model card was written by: DeepFloyd Team and is based on the [StableDiffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).* |
Yntec/samaritan3dCartoon2MVAE | Yntec | 2023-08-07T11:14:21Z | 554 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"3D",
"art",
"style",
"checkpoint",
"PromptSharingSamaritan",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-07T10:18:54Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0MGXGAxBpd-qDBWPYnWhR.png
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- 3D
- art
- style
- checkpoint
- PromptSharingSamaritan
- diffusers
inference: true
---
# samaritan 3d Cartoon 2
This is the samaritan 3d Cartoon 2 model with the MoistMix VAE baked in.
Previews and prompt:


(lora)0.5 , (amakawa hano)0.5 , 1 girl, ray tracing, {best quality}, {{masterpiece}}, {highres}, original, extremely detailed 8K wallpaper, {an extremely delicate and beautiful}, , incredibly_absurdres, colorful, intricate detail, artbook
Original pages:
https://civitai.com/models/81270?modelVersionId=113299
https://civitai.com/api/download/models/14459?type=VAE
|
TheBloke/llama-2-7B-Arguments-GGUF | TheBloke | 2023-10-06T00:17:49Z | 554 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:cris177/llama-2-7b-Arguments",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-05T22:39:35Z | ---
base_model: cris177/llama-2-7b-Arguments
inference: false
license: llama2
model_creator: Cristian Desivo
model_name: Llama 2 7B Arguments
model_type: llama
prompt_template: '<s>[INST] {prompt}
[/INST]
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 7B Arguments - GGUF
- Model creator: [Cristian Desivo](https://huggingface.co/cris177)
- Original model: [Llama 2 7B Arguments](https://huggingface.co/cris177/llama-2-7b-Arguments)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Cristian Desivo's Llama 2 7B Arguments](https://huggingface.co/cris177/llama-2-7b-Arguments).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama-2-7B-Arguments-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF)
* [Cristian Desivo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cris177/llama-2-7b-Arguments)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Prompt
```
<s>[INST] {prompt}
[/INST]
```
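For example, a tiny helper along these lines (illustrative only) can be used to fill the template before sending text to the model:
```python
def build_prompt(user_message: str) -> str:
    # wrap the user message in the Llama-2 style template shown above
    return f"<s>[INST] {user_message}\n[/INST]"
print(build_prompt("Is the following argument valid? All men are mortal, Socrates is a man, hence Socrates is mortal."))
```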
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-7b-arguments.Q2_K.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-7b-arguments.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [llama-2-7b-arguments.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [llama-2-7b-arguments.Q3_K_L.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [llama-2-7b-arguments.Q4_0.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-7b-arguments.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [llama-2-7b-arguments.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [llama-2-7b-arguments.Q5_0.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-7b-arguments.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [llama-2-7b-arguments.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [llama-2-7b-arguments.Q6_K.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [llama-2-7b-arguments.Q8_0.gguf](https://huggingface.co/TheBloke/llama-2-7B-Arguments-GGUF/blob/main/llama-2-7b-arguments.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/llama-2-7B-Arguments-GGUF and below it, a specific filename to download, such as: llama-2-7b-arguments.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/llama-2-7B-Arguments-GGUF llama-2-7b-arguments.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/llama-2-7B-Arguments-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama-2-7B-Arguments-GGUF llama-2-7b-arguments.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-7b-arguments.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt}\n[/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama-2-7B-Arguments-GGUF", model_file="llama-2-7b-arguments.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
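As a minimal illustration (assuming `llama-cpp-python` and LangChain are installed and the GGUF file has been downloaded locally), a `LlamaCpp` LLM can be set up roughly like this; see the guides above for full details:
```python
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="llama-2-7b-arguments.Q4_K_M.gguf",  # local path to the downloaded GGUF (assumed)
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
# prompt wording follows the template and examples in the original model card below
print(llm("<s>[INST] Analize the following argument, identifying premises, conclusion, type of argument, and argument validity:\nAll men are mortal, and Socrates is a man, hence Socrates is mortal.\n[/INST]"))
```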
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Cristian Desivo's Llama 2 7B Arguments
This model aims to detect and analyze casual arguments.
Model template:
```
<s>[INST] {prompt}
[/INST]
```
Example:
```
`<`s`>`[INST] Analize the following argument, identifying premises, conclusion, type of argument, and argument validity:
If officer smith found a broken window at the crime scene then the arson occurred on elm street, and officer smith found a broken window at the crime scene, hence the arson occurred on elm street.
[/INST] Premise 1: If officer smith found a broken window at the crime scene then the arson occurred on elm street Premise 2: Officer smith found a broken window at the crime scene Conclusion: The arson occurred on Elm Street Type of argument: modus ponen Validity: True `<`/s`>`
```
It was trained on my dataset cris177/Arguments (https://huggingface.co/datasets/cris177/Arguments)
<!-- original-model-card end -->
|
YanweiLi/llama-vid-13b-full-224-video-fps-1 | YanweiLi | 2023-12-03T06:14:18Z | 554 | 2 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"vision-language model",
"llama",
"video understanding",
"arxiv:2311.17043",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-11-29T06:13:00Z | ---
tags:
- vision-language model
- llama
- video understanding
---
# LLaMA-VID Model Card
<a href='https://llama-vid.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
<a href='https://arxiv.org/abs/2311.17043'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
## Model details
LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token.
**Model type:**
LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
We build this repo based on LLaVA.
**Model date:**
llama-vid-13b-full-224-video-fps-1 was trained in November 2023.
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/dvlab-research/LLaMA-VID/issues
## Intended use
**Primary intended uses:**
The primary use of LLaMA-VID is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training data
This model is trained based on image data from LLaVA-1.5 dataset, and video data from WebVid and ActivityNet datasets, including
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
- 232K video-caption pairs sampled from the WebVid 2.5M dataset.
- 98K videos from ActivityNet with QA pairs from Video-ChatGPT. |
Gan1108/electraForCausalLM | Gan1108 | 2024-03-27T04:12:58Z | 554 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"text-generation",
"generated_from_trainer",
"uk",
"base_model:lang-uk/electra-base-ukrainian-cased-generator",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-03-25T08:09:41Z | ---
license: mit
base_model: lang-uk/electra-base-ukrainian-cased-generator
tags:
- generated_from_trainer
model-index:
- name: electraForCausalLM
results: []
language:
- uk
widget:
- text: "0007347852 - 734785.0 - Підшипник кульковий закритий спеціальний "
---
# electraForCausalLM
This model is a fine-tuned version of [lang-uk/electra-base-ukrainian-cased-generator](https://huggingface.co/lang-uk/electra-base-ukrainian-cased-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3302
## Model description
The model is trained to generate a text description in Ukrainian of spare parts for agricultural machinery based on their name.
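A minimal inference sketch with 🤗 Transformers is shown below; the prompt is taken from the widget example above, while the generation settings are assumptions rather than the author's:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Minimal sketch; generation settings are assumptions, not the author's.
tokenizer = AutoTokenizer.from_pretrained("Gan1108/electraForCausalLM")
model = AutoModelForCausalLM.from_pretrained("Gan1108/electraForCausalLM")

prompt = "0007347852 - 734785.0 - Підшипник кульковий закритий спеціальний "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```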
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
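For reference, a minimal sketch reproducing these settings as `TrainingArguments` might look as follows; the output directory and evaluation strategy are assumptions, not taken from the original run:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="electraForCausalLM",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",       # assumption: matches the per-epoch losses reported below
)
```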
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7321 | 1.0 | 2354 | 1.5172 |
| 1.4763 | 2.0 | 4708 | 1.3670 |
| 1.4006 | 3.0 | 7062 | 1.3302 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
mradermacher/c4ai-command-r-plus-i1-GGUF | mradermacher | 2024-05-06T04:37:22Z | 554 | 1 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"base_model:CohereForAI/c4ai-command-r-plus",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-21T01:28:20Z | ---
base_model: CohereForAI/c4ai-command-r-plus
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/CohereForAI/c4ai-command-r-plus
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/c4ai-command-r-plus-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
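For example, a minimal Python sketch for joining the two-part i1-Q4_K_M file listed below; adjust the filenames to whichever split variant you download:
```python
import shutil

# Join the split GGUF parts into a single file without loading them into RAM.
parts = [
    "c4ai-command-r-plus.i1-Q4_K_M.gguf.part1of2",
    "c4ai-command-r-plus.i1-Q4_K_M.gguf.part2of2",
]
with open("c4ai-command-r-plus.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream copy of each part into the output file
```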
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ1_S.gguf) | i1-IQ1_S | 23.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ1_M.gguf) | i1-IQ1_M | 25.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 31.7 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ2_S.gguf) | i1-IQ2_S | 33.4 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ2_M.gguf) | i1-IQ2_M | 36.1 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q2_K.gguf) | i1-Q2_K | 39.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 40.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 43.7 | |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 46.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ3_S.gguf) | i1-IQ3_S | 46.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ3_M.gguf) | i1-IQ3_M | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 51.1 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 55.5 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 56.3 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 59.5 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 59.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.9 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.7 | |
| [PART 1](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/c4ai-command-r-plus-i1-GGUF/resolve/main/c4ai-command-r-plus.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 85.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF | mradermacher | 2024-06-14T03:23:59Z | 554 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-13T04:31:33Z | ---
base_model: Envoid/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B-i1-GGUF/resolve/main/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF | janw23 | 2024-06-27T15:53:26Z | 554 | 0 | null | [
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:NTQAI/Nxcode-CQ-7B-orpo",
"license:other",
"region:us"
]
| text-generation | 2024-06-27T15:53:06Z | ---
base_model: NTQAI/Nxcode-CQ-7B-orpo
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- gguf-my-repo
---
# janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF
This model was converted to GGUF format from [`NTQAI/Nxcode-CQ-7B-orpo`](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF --hf-file nxcode-cq-7b-orpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF --hf-file nxcode-cq-7b-orpo-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF --hf-file nxcode-cq-7b-orpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo janw23/Nxcode-CQ-7B-orpo-Q4_K_M-GGUF --hf-file nxcode-cq-7b-orpo-q4_k_m.gguf -c 2048
```
|
nazimali/instruction-synthesizer-Q6_K-GGUF | nazimali | 2024-07-01T23:48:55Z | 554 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:instruction-pretrain/ft-instruction-synthesizer-collection",
"base_model:instruction-pretrain/instruction-synthesizer",
"license:apache-2.0",
"region:us"
]
| null | 2024-07-01T23:48:30Z | ---
base_model: instruction-pretrain/instruction-synthesizer
datasets:
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# nazimali/instruction-synthesizer-Q6_K-GGUF
This model was converted to GGUF format from [`instruction-pretrain/instruction-synthesizer`](https://huggingface.co/instruction-pretrain/instruction-synthesizer) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/instruction-pretrain/instruction-synthesizer) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nazimali/instruction-synthesizer-Q6_K-GGUF --hf-file instruction-synthesizer-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nazimali/instruction-synthesizer-Q6_K-GGUF --hf-file instruction-synthesizer-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nazimali/instruction-synthesizer-Q6_K-GGUF --hf-file instruction-synthesizer-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nazimali/instruction-synthesizer-Q6_K-GGUF --hf-file instruction-synthesizer-q6_k.gguf -c 2048
```
|
TheBloke/Genz-70b-GGUF | TheBloke | 2023-09-27T12:46:25Z | 553 | 9 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"base_model:budecosystem/genz-70b",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-08-26T16:37:13Z | ---
language:
- en
license: llama2
library_name: transformers
model_name: GenZ 70B
base_model: budecosystem/genz-70b
inference: false
model_creator: Bud
model_type: llama
pipeline_tag: text-generation
prompt_template: '### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# GenZ 70B - GGUF
- Model creator: [Bud](https://huggingface.co/budecosystem)
- Original model: [GenZ 70B](https://huggingface.co/budecosystem/genz-70b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Bud's GenZ 70B](https://huggingface.co/budecosystem/genz-70b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Genz-70b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Genz-70b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Genz-70b-GGUF)
* [Bud's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/budecosystem/genz-70b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [genz-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [genz-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [genz-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [genz-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [genz-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [genz-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [genz-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [genz-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [genz-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [genz-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Genz-70b-GGUF/blob/main/genz-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| genz-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| genz-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `genz-70b.Q6_K.gguf-split-a`
* `genz-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `genz-70b.Q8_0.gguf-split-a`
* `genz-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat genz-70b.Q6_K.gguf-split-* > genz-70b.Q6_K.gguf && rm genz-70b.Q6_K.gguf-split-*
cat genz-70b.Q8_0.gguf-split-* > genz-70b.Q8_0.gguf && rm genz-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B genz-70b.Q6_K.gguf-split-a + genz-70b.Q6_K.gguf-split-b genz-70b.Q6_K.gguf
del genz-70b.Q6_K.gguf-split-a genz-70b.Q6_K.gguf-split-b
COPY /B genz-70b.Q8_0.gguf-split-a + genz-70b.Q8_0.gguf-split-b genz-70b.Q8_0.gguf
del genz-70b.Q8_0.gguf-split-a genz-70b.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Genz-70b-GGUF and below it, a specific filename to download, such as: genz-70b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Genz-70b-GGUF genz-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Genz-70b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Genz-70b-GGUF genz-70b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m genz-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Genz-70b-GGUF", model_file="genz-70b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
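llama-cpp-python is mentioned above without an example, so here is a minimal sketch, assuming you have joined or downloaded `genz-70b.Q4_K_M.gguf` into the current directory; the generation settings are illustrative:
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(model_path="genz-70b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=50)

# Prompt follows the User/Assistant template documented above.
prompt = "### User:\nWrite a haiku about large language models\n\n### Assistant:\n"
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```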
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Bud's GenZ 70B
---
<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>
---
## Introduction 🎉
Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 70B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_compare.png" width="500"></p>
The smaller quantization version of our models makes them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.
GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver.
And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀
---
<h2>Milestone Releases ️🏁</h2>
**[21 August 2023]**
[_GenZ-70B_](https://huggingface.co/budecosystem/genz-70b) : We're excited to announce the release of our GenZ 70B model. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-70b).
**[27 July 2023]**
[_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inference using only a CPU, without the need for a GPU. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml).
**[27 July 2023]**
[_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation, enabling inference with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit).
**[26 July 2023]**
[_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2).
**[20 July 2023]**
[_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b).
---
<h2>Evaluations 🎯</h2>
Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 70B:
<h3>Benchmark Comparison</h3>
We've compared GenZ models to understand the improvements our fine-tuning has achieved.
| Model Name | MT Bench | MMLU | Human Eval | BBH |
|:----------:|:--------:|:----:|:----------:|:----:|
| Genz 13B | 6.12 | 53.62| 17.68 | 37.76|
| Genz 13B v2| 6.79 | 53.68| 21.95 | 38.1 |
| Genz 70B | 7.33 | 70.32| 37.8 |54.69 |
<h3>MT Bench Score</h3>
A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p>
---
<h2>Getting Started on Hugging Face 🤗</h2>
Getting up and running with our models on Hugging Face is a breeze. Follow these steps:
<h3>1️⃣ : Import necessary modules</h3>
Start by importing the necessary modules from the ‘transformers’ library and ‘torch’.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-70b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-70b", torch_dtype=torch.bfloat16, rope_scaling={"type": "dynamic", "factor": 2})
prompt = "### User:\nWrite a python flask code for login management\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄
<h2>Why Use GenZ? 💡</h2>
You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models.
While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge.
What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there.
But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself.
Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models.
---
<h2>Model Card for GenZ 70B 📄</h2>
Here's a quick overview of everything you need to know about GenZ 70B.
<h3>Model Details:</h3>
- Developed by: Bud Ecosystem
- Base pretrained model type: Llama V2 70B
- Model Architecture: GenZ 70B, fine-tuned on Llama V2 70B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 70B leveraged Supervised Fine-Tuning (SFT)
- License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
---
<h2>Intended Use 💼</h2>
When we created GenZ 70B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 70B.
<h3>Direct Use</h3>
GenZ 70B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as:
- Text summarization
- Text generation
- Chatbot creation
- And much more!
<h3>Out-of-Scope Use 🚩</h3>
While GenZ 70B is versatile, there are certain uses that are out of scope:
- Production use without adequate assessment of risks and mitigation
- Any use cases which may be considered irresponsible or harmful
- Use in any manner that violates applicable laws or regulations, including trade compliance laws
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2
Remember, GenZ 70B, like any large language model, is trained on a large-scale corpora representative of the web, and therefore, may carry the stereotypes and biases commonly encountered online.
<h3>Recommendations 🧠</h3>
We recommend users of GenZ 70B to consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.
---
<h2>Training Details 📚</h2>
When fine-tuning GenZ 70B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 70B model in the most effective way. Here's a look at the key details of our training process:
<h3>Fine-Tuning Training Data</h3>
For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks.
<h3>Hyperparameters</h3>
Here are the hyperparameters we used for fine-tuning:
| Hyperparameter | Value |
| -------------- | ----- |
| Warmup Ratio | 0.04 |
| Learning Rate Scheduler Type | Cosine |
| Learning Rate | 2e-5 |
| Number of Training Epochs | 3 |
| Per Device Training Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Precision | FP16 |
| Optimizer | AdamW |
---
<h2>Looking Ahead 👀</h2>
We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together.
Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀
---
Check the GitHub for the code -> [GenZ](https://github.com/BudEcosystem/GenZ)
<!-- original-model-card end -->
|
maddes8cht/gorilla-llm-gorilla-mpt-7b-hf-v0-gguf | maddes8cht | 2023-11-22T20:26:15Z | 553 | 1 | null | [
"gguf",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-03T16:04:26Z | ---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# gorilla-mpt-7b-hf-v0 - GGUF
- Model creator: [gorilla-llm](https://huggingface.co/gorilla-llm)
- Original model: [gorilla-mpt-7b-hf-v0](https://huggingface.co/gorilla-llm/gorilla-mpt-7b-hf-v0)
MPT-7b and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.
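As a rough illustration, here is a minimal llama-cpp-python sketch for loading one of the quantized files from this repo; the filename and prompt are assumptions, so substitute the variant you actually downloaded:
```python
from llama_cpp import Llama

# Hypothetical filename: replace with the quant variant you downloaded from this repo.
llm = Llama(model_path="gorilla-mpt-7b-hf-v0-Q5_K_M.gguf", n_ctx=2048)

# Example request; the prompt text is illustrative only.
output = llm("I want an API to translate English text to German.", max_tokens=128)
print(output["choices"][0]["text"])
```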
---
# Original Model Card:
license: apache-2.0
---
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
guishe/span-marker-generic-ner-v1-fewnerd-fine-super | guishe | 2024-03-28T20:46:44Z | 553 | 13 | span-marker | [
"span-marker",
"pytorch",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"dataset:DFKI-SLT/few-nerd",
"base_model:numind/generic-entity_recognition_NER-v1",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| token-classification | 2023-11-09T18:44:28Z | ---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- precision
- recall
- f1
widget:
- text: Concern and scepticism surround Niger uranium mining waste storage plans. Towering mounds dot the desert landscape in northern Niger's Arlit region, but they are heaps of partially radioactive waste left from four decades of operations at one of the world's biggest uranium mines. An ambitious 10-year scheme costing $160 million is underway to secure the waste and avoid risks to health and the environment, but many local people are worried or sceptical. France's nuclear giant Areva, now called Orano, worked the area under a subsidiary, the Akouta Mining Company (Cominak). Cominak closed the site in 2021 after extracting 75,000 tonnes of uranium, much of which went to fuelling the scores of nuclear reactors that provide the backbone of France's electricity supply. Cominak's director general Mahaman Sani Abdoulaye showcased the rehabilitation project to the first French journalists to visit the site since 2010, when seven Areva employees were kidnapped by jihadists.
- text: SE Michigan counties allege insulin gouging; Localities file lawsuit against pharmaceutical makers. Four metro Detroit counties filed federal lawsuits Wednesday against some of the nation's biggest pharmaceutical manufacturers and pharmacy benefit managers alleging illegal price fixing for insulin products. Macomb, Monroe, Wayne and Washtenaw counties filed the lawsuits in U.S. District Court in New Jersey against more than a dozen companies, including Lilly, Sanofi Aventis, Novo Nordisk, Express Scripts, Optum Rx and CVS Caremark, per their attorneys. "These are the first such lawsuits that have been filed in the state of Michigan and probably more to come," said attorney Melvin Butch Hollowell of the Miller Law Firm. He described the allegations during a news conference, saying that nationally "the pharmacies and manufacturers get together. They control about 90% of the market each, of the insulin market. They talk to each other secretly. And they jack up the prices through anticompetitive means. And what we've seen is over the past 20 years, when we talk about jacking up the prices, they jack them up 1,500% in the last 20 years. 1,500%."
- text: Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN) — Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet.
- text: Tech giants ‘could severely disable UK spooks from stopping online harms’. Silicon Valley tech giants’ actions could “severely disable” UK spooks from preventing harm caused by online paedophiles and fraudsters, Suella Braverman has suggested. The Conservative former home secretary named Facebook owner Meta , and Apple, and their use of technologies such as end-to-end encryption as a threat to attempts to tackle digital crimes. She claimed the choice to back these technologies without “safeguards” could “enable and indeed facilitate some of the worst atrocities that our brave men and women in law enforcement agencies deal with every day”, as MPs began considering changes to investigatory powers laws. The Investigatory Powers (Amendment) Bill includes measures to make it easier for agencies to examine and retain bulk datasets, such as publicly available online telephone records, and would allow intelligence agencies to use internet connection records to aid detection of their targets. We know that the terrorists, the serious organised criminals, and fraudsters, and the online paedophiles, all take advantage of the dark web and encrypted spaces
- text: Camargo Corrêa asks Toffoli to suspend the fine agreed with Lava Jato. The Camargo Corrêa group has asked Justice Dias Toffoli to suspend the R$1.4 billion fine it agreed to pay in its leniency agreement under Operation Car Wash. The company asked for an extension of the minister's decisions that benefited J&F and Odebrecht. Like the other companies, it claimed that it suffered undue pressure from members of the Federal Public Prosecutor's Office (MPF) to close the deal. Much of the request is based on messages exchanged between prosecutors from the Curitiba task force and former judge Sergio Moro - Camargo Corrêa requested full access to the material, seized in Operation Spoofing, which arrested the hackers who broke into cell phones. The dialogues, according to the group's defense, indicate that the executives did not freely agree to the deal, since they were the targets of lawsuits and pre-trial detentions.
pipeline_tag: token-classification
base_model: numind/generic-entity_recognition_NER-v1
model-index:
- name: SpanMarker with numind/generic-entity_recognition-v1 on FewNERD
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: FewNERD
type: DFKI-SLT/few-nerd
split: eval
metrics:
- type: f1
value: 0.7039859923782059
name: F1
- type: precision
value: 0.7047408904377952
name: Precision
- type: recall
value: 0.7032327098380559
name: Recall
---
# SpanMarker with numind/generic-entity_recognition-v1 on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [numind/generic-entity_recognition_NER-v1](https://huggingface.co/numind/generic-entity_recognition_NER-v1) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [numind/generic-entity_recognition_NER-v1](https://huggingface.co/numind/generic-entity_recognition_NER-v1)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
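A minimal inference sketch with the `span_marker` library is shown below; the input sentence is a made-up example:
```python
from span_marker import SpanMarkerModel

# Download the model from the 🤗 Hub.
model = SpanMarkerModel.from_pretrained("guishe/span-marker-generic-ner-v1-fewnerd-fine-super")

# Run NER on a made-up sentence and print the recognized spans with their labels.
entities = model.predict("Amelia Earhart flew her Lockheed Vega 5B across the Atlantic to Paris.")
for entity in entities:
    print(entity["span"], "->", entity["label"])
```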
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "Corazones", "The Gale Storm Show : Oh , Susanna", "Street Cents" |
| art-film | "Shawshank Redemption", "L'Atlantide", "Bosch" |
| art-music | "Hollywood Studio Symphony", "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Champion Lover" |
| art-other | "The Today Show", "Venus de Milo", "Aphrodite of Milos" |
| art-painting | "Production/Reproduction", "Touit", "Cofiwch Dryweryn" |
| art-writtenart | "The Seven Year Itch", "Imelda de ' Lambertazzi", "Time" |
| building-airport | "Sheremetyevo International Airport", "Newark Liberty International Airport", "Luton Airport" |
| building-hospital | "Yeungnam University Hospital", "Hokkaido University Hospital", "Memorial Sloan-Kettering Cancer Center" |
| building-hotel | "The Standard Hotel", "Flamingo Hotel", "Radisson Blu Sea Plaza Hotel" |
| building-library | "British Library", "Bayerische Staatsbibliothek", "Berlin State Library" |
| building-other | "Henry Ford Museum", "Alpha Recording Studios", "Communiplex" |
| building-restaurant | "Carnegie Deli", "Fatburger", "Trumbull" |
| building-sportsfacility | "Boston Garden", "Sports Center", "Glenn Warner Soccer Facility" |
| building-theater | "Sanders Theatre", "National Paris Opera", "Pittsburgh Civic Light Opera" |
| event-attack/battle/war/militaryconflict | "Easter Offensive", "Jurist", "Vietnam War" |
| event-disaster | "the 1912 North Mount Lyell Disaster", "1990s North Korean famine", "1693 Sicily earthquake" |
| event-election | "Elections to the European Parliament", "March 1898 elections", "1982 Mitcham and Morden by-election" |
| event-other | "Union for a Popular Movement", "Masaryk Democratic Movement", "Eastwood Scoring Stage" |
| event-protest | "Iranian Constitutional Revolution", "French Revolution", "Russian Revolution" |
| event-sportsevent | "World Cup", "National Champions", "Stanley Cup" |
| location-GPE | "Croatian", "Mediterranean Basin", "the Republic of Croatia" |
| location-bodiesofwater | "Arthur Kill", "Atatürk Dam Lake", "Norfolk coast" |
| location-island | "new Samsat district", "Laccadives", "Staten Island" |
| location-mountain | "Salamander Glacier", "Miteirya Ridge", "Ruweisat Ridge" |
| location-other | "Victoria line", "Northern City Line", "Cartuther" |
| location-park | "Painted Desert Community Complex Historic District", "Gramercy Park", "Shenandoah National Park" |
| location-road/railway/highway/transit | "NJT", "Newark-Elizabeth Rail Link", "Friern Barnet Road" |
| organization-company | "Texas Chicken", "Dixy Chicken", "Church 's Chicken" |
| organization-education | "MIT", "Belfast Royal Academy and the Ulster College of Physical Education", "Barnard College" |
| organization-government/governmentagency | "Congregazione dei Nobili", "Diet", "Supreme Court" |
| organization-media/newspaper | "Clash", "Al Jazeera", "TimeOut Melbourne" |
| organization-other | "Defence Sector C", "IAEA", "4th Army" |
| organization-politicalparty | "Al Wafa ' Islamic", "Shimpotō", "Kenseitō" |
| organization-religion | "UPCUSA", "Christian", "Jewish" |
| organization-showorganization | "Lizzy", "Bochumer Symphoniker", "Mr. Mister" |
| organization-sportsleague | "China League One", "NHL", "First Division" |
| organization-sportsteam | "Arsenal", "Luc Alphand Aventures", "Tottenham" |
| other-astronomything | "Algol", "`` Caput Larvae ''", "Zodiac" |
| other-award | "Order of the Republic of Guinea and Nigeria", "Grand Commander of the Order of the Niger", "GCON" |
| other-biologything | "N-terminal lipid", "Amphiphysin", "BAR" |
| other-chemicalthing | "uranium", "carbon dioxide", "sulfur" |
| other-currency | "$", "lac crore", "Travancore Rupee" |
| other-disease | "bladder cancer", "French Dysentery Epidemic of 1779", "hypothyroidism" |
| other-educationaldegree | "BSc ( Hons ) in physics", "Bachelor", "Master" |
| other-god | "Raijin", "Fujin", "El" |
| other-language | "Breton-speaking", "Latin", "English" |
| other-law | "Leahy–Smith America Invents Act ( AIA", "United States Freedom Support Act", "Thirty Years ' Peace" |
| other-livingthing | "monkeys", "patchouli", "insects" |
| other-medical | "amitriptyline", "Pediatrics", "pediatrician" |
| person-actor | "Tchéky Karyo", "Edmund Payne", "Ellaline Terriss" |
| person-artist/author | "Hicks", "Gaetano Donizett", "George Axelrod" |
| person-athlete | "Tozawa", "Neville", "Jaguar" |
| person-director | "Richard Quine", "Bob Swaim", "Frank Darabont" |
| person-other | "Campbell", "Holden", "Richard Benson" |
| person-politician | "William", "Rivière", "Emeric" |
| person-scholar | "Wurdack", "Stalmine", "Stedman" |
| person-soldier | "Joachim Ziegler", "Helmuth Weidling", "Krukenberg" |
| product-airplane | "Spey-equipped FGR.2s", "EC135T2 CPDS", "Luton" |
| product-car | "Phantom", "100EX", "Corvettes - GT1 C6R" |
| product-food | "red grape", "yakiniku", "V. labrusca" |
| product-game | "Hardcore RPG", "Splinter Cell", "Airforce Delta" |
| product-other | "X11", "PDP-1", "Fairbottom Bobs" |
| product-ship | "Essex", "Congress", "HMS `` Chinkara ''" |
| product-software | "AmiPDF", "Wikipedia", "Apdf" |
| product-train | "55022", "Royal Scots Grey", "High Speed Trains" |
| product-weapon | "AR-15 's", "ZU-23-2MR Wróbel II", "ZU-23-2M Wróbel" |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("guishe/span-marker-generic-ner-v1-fewnerd-fine-super")
# Run inference
entities = model.predict("Most of the Steven Seagal movie \"Under Siege \"(co-starring Tommy Lee Jones) was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from span_marker import SpanMarkerModel, Trainer
from datasets import load_dataset
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("guishe/span-marker-generic-ner-v1-fewnerd-fine-super")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("guishe/span-marker-generic-ner-v1-fewnerd-fine-super-finetuned")
```
</details>
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
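For reproducibility, the hyperparameters listed above map onto a standard `transformers` `TrainingArguments` object, which SpanMarker's `Trainer` accepts. A minimal sketch (the `output_dir` is a placeholder; the optimizer and epsilon values above are the library defaults unless set explicitly):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder name
args = TrainingArguments(
    output_dir="span-marker-fewnerd-fine",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
# Pass as: Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```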
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.2980 | 3000 | 0.0290 | 0.6503 | 0.6402 | 0.6452 | 0.9109 |
| 0.5961 | 6000 | 0.0250 | 0.6749 | 0.6794 | 0.6772 | 0.9202 |
| 0.8941 | 9000 | 0.0236 | 0.6908 | 0.6871 | 0.6889 | 0.9229 |
| 1.1921 | 12000 | 0.0234 | 0.6853 | 0.7007 | 0.6929 | 0.9239 |
| 1.4902 | 15000 | 0.0227 | 0.6966 | 0.6929 | 0.6948 | 0.9241 |
| 1.7882 | 18000 | 0.0221 | 0.7073 | 0.6922 | 0.6997 | 0.9250 |
| 2.0862 | 21000 | 0.0223 | 0.7003 | 0.6993 | 0.6998 | 0.9252 |
| 2.3843 | 24000 | 0.0222 | 0.6971 | 0.7027 | 0.6999 | 0.9254 |
| 2.6823 | 27000 | 0.0219 | 0.7044 | 0.7004 | 0.7024 | 0.9259 |
| 2.9803 | 30000 | 0.0219 | 0.7047 | 0.7032 | 0.7040 | 0.9261 |
### Framework Versions
- Python: 3.10.8
- SpanMarker: 1.5.0
- Transformers: 4.28.0
- PyTorch: 1.13.1+cu117
- Datasets: 2.14.4
- Tokenizers: 0.13.3
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
|
nicholasKluge/TeenyTinyLlama-460m | nicholasKluge | 2024-06-18T11:59:46Z | 553 | 7 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"pt",
"dataset:nicholasKluge/Pt-Corpus-Instruct",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-02T13:59:11Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: 'A PUCRS é uma universidade '
example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 20
top_p: 0.2
max_new_tokens: 150
co2_eq_emissions:
emissions: 41100
source: CodeCarbon
training_type: pre-training
geographical_location: Germany
hardware_used: NVIDIA A100-SXM4-40GB
model-index:
- name: TeenyTinyLlama-460m
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 20.15
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 25.73
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 27.02
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 53.61
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 13.0
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 46.41
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 33.59
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 22.99
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 17.28
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=nicholasKluge/TeenyTinyLlama-460m
name: Open Portuguese LLM Leaderboard
---
# TeenyTinyLlama-460m
<img src="./logo.png" alt="A curious llama exploring a mushroom forest." height="200">
## Model Summary
Large language models (LLMs) have significantly advanced natural language processing, but their progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, their multilingual foundations can impose practical constraints, such as higher computational demands and more restrictive licensing regimes. Hence, we developed the _TeenyTinyLlama_ pair: two compact models for Brazilian Portuguese text generation.
Read our article [here](https://www.sciencedirect.com/science/article/pii/S2666827024000343).
## Details
- **Architecture:** a Transformer-based model pre-trained via causal language modeling
- **Size:** 468,239,360 parameters
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens)
- **Language:** Portuguese
- **Number of steps:** 1,200,000
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Training time**: ~ 280 hours
- **Emissions:** 41.1 KgCO2 (Germany)
- **Total energy consumption:** 115.69 kWh
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model. The main libraries used are:
- [Transformers](https://github.com/huggingface/transformers)
- [PyTorch](https://github.com/pytorch/pytorch)
- [Datasets](https://github.com/huggingface/datasets)
- [Tokenizers](https://github.com/huggingface/tokenizers)
- [Sentencepiece](https://github.com/google/sentencepiece)
- [Accelerate](https://github.com/huggingface/accelerate)
- [FlashAttention](https://github.com/Dao-AILab/flash-attention)
- [Codecarbon](https://github.com/mlco2/codecarbon)
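Emissions and energy figures were tracked with CodeCarbon. As an illustrative sketch only (the project name is a placeholder and the training loop is elided), such tracking wraps the training run like this:
```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="TeenyTinyLlama-460m")  # placeholder project name
tracker.start()
try:
    pass  # the actual pre-training loop would run here
finally:
    emissions_kg = tracker.stop()  # estimated total KgCO2eq for the run
    print(f"Estimated emissions: {emissions_kg} KgCO2eq")
```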
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese language only and are not suitable for translation or generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Basic usage
Using the `pipeline`:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="nicholasKluge/TeenyTinyLlama-460m")
completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100)
for comp in completions:
print(f"🤖 {comp['generated_text']}")
```
Using the `AutoTokenizer` and `AutoModelForCausalLM`:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m", revision='main')
# Pass the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.eval()
model.to(device)
# Tokenize the inputs and pass them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)
# Generate some text
completions = model.generate(**inputs, do_sample=True, num_return_sequences=2, max_new_tokens=100)
# Print the generated text
for i, completion in enumerate(completions):
print(f'🤖 {tokenizer.decode(completion)}')
```
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibited behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess if there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if further trained to pass the Chinchilla optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
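The perplexities above come from the authors' held-out evaluation set during pre-training. As a rough illustration only (not the original evaluation pipeline), perplexity on a single text can be computed from the model's cross-entropy loss; the example sentence is arbitrary:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
model = AutoModelForCausalLM.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
model.eval()

text = "Astronomia é a ciência que estuda os corpos celestes."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing the inputs as labels yields the mean cross-entropy over next-token predictions
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```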
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the task translations provided by [Laiviet's fork](https://github.com/laiviet/lm-evaluation-harness) of the LM-Evaluation-Harness. The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160 | 33.33 | 12.81 | 16.13 | 16.66 | 50.36 | 41.09 | 22.82 | 27.60 |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we decided to employ a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we apply the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese and have a similar size range to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of producing good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All the shown results are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models; a minimal sketch of the procedure is shown below.
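The sketch below illustrates the kind of sequence-classification fine-tuning used for these comparisons. The dataset files, hyperparameter values, and output directory are placeholders, not the exact recipe; refer to the linked fine-tuned model cards for the actual implementation.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "nicholasKluge/TeenyTinyLlama-460m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs need a pad token for classification

model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder data: any corpus with "text" and "label" columns works here
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=256), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ttl-460m-finetuned",  # placeholder
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=4e-5,  # illustrative value
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```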
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Springer},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
vinai/PhoWhisper-base | vinai | 2024-02-24T04:27:13Z | 553 | 5 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-18T05:01:16Z | # PhoWhisper: Automatic Speech Recognition for Vietnamese
We introduce **PhoWhisper** in five versions for Vietnamese automatic speech recognition. PhoWhisper's robustness is achieved through fine-tuning the multilingual [Whisper](https://github.com/openai/whisper) on an 844-hour dataset that encompasses diverse Vietnamese accents. Our experimental study demonstrates state-of-the-art performances of PhoWhisper on benchmark Vietnamese ASR datasets. Please **cite** our PhoWhisper paper when it is used to help produce published results or is incorporated into other software:
```
@inproceedings{PhoWhisper,
title = {{PhoWhisper: Automatic Speech Recognition for Vietnamese}},
author = {Thanh-Thien Le and Linh The Nguyen and Dat Quoc Nguyen},
booktitle = {Proceedings of the ICLR 2024 Tiny Papers track},
year = {2024}
}
```
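Since the checkpoint is tagged as a standard Whisper model for 🤗 Transformers, a quick transcription test can be run with the `automatic-speech-recognition` pipeline. This is a minimal sketch; the audio path is a placeholder for a Vietnamese recording:
```python
from transformers import pipeline

# Load PhoWhisper-base as a regular Whisper-style ASR pipeline
transcriber = pipeline("automatic-speech-recognition", model="vinai/PhoWhisper-base")

# "speech.wav" is a placeholder path to a Vietnamese audio file
result = transcriber("speech.wav")
print(result["text"])
```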
For further information or requests, please go to [PhoWhisper's homepage](https://github.com/VinAIResearch/PhoWhisper)! |
Aravindan/gpt2out | Aravindan | 2024-06-06T15:58:35Z | 553 | 3 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:Aravindan/gpt2out",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-02T07:26:12Z | ---
license: mit
base_model: Aravindan/gpt2out
tags:
- generated_from_trainer
model-index:
- name: gpt2coder-8epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2coder-8epochs
This model is a fine-tuned version of [Aravindan/gpt2out](https://huggingface.co/Aravindan/gpt2out) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4708 | 0.9998 | 703 | 1.9618 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/DarkForest-20B-v3.0-i1-GGUF | mradermacher | 2024-06-17T07:26:36Z | 553 | 2 | transformers | [
"transformers",
"gguf",
"merge",
"not-for-all-audiences",
"en",
"base_model:TeeZee/DarkForest-20B-v3.0",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-16T23:52:39Z | ---
base_model: TeeZee/DarkForest-20B-v3.0
language:
- en
library_name: transformers
license: other
license_name: microsoft-research-license
quantized_by: mradermacher
tags:
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TeeZee/DarkForest-20B-v3.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DarkForest-20B-v3.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
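As a minimal sketch (assuming `llama-cpp-python` is installed; this is not part of the repo itself), a single-file quant such as the i1-Q4_K_M file listed below can be run locally like this; the prompt is arbitrary:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point model_path at the downloaded i1-Q4_K_M quant from the table below
llm = Llama(model_path="DarkForest-20B-v3.0.i1-Q4_K_M.gguf", n_ctx=4096)

output = llm("Write a short scene set in a dark forest.", max_tokens=128)
print(output["choices"][0]["text"])
```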
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ1_S.gguf) | i1-IQ1_S | 4.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ1_M.gguf) | i1-IQ1_M | 4.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ2_S.gguf) | i1-IQ2_S | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ2_M.gguf) | i1-IQ2_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q2_K.gguf) | i1-Q2_K | 7.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ3_S.gguf) | i1-IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ3_M.gguf) | i1-IQ3_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q4_0.gguf) | i1-Q4_0 | 11.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 13.9 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/DarkForest-20B-v3.0-i1-GGUF/resolve/main/DarkForest-20B-v3.0.i1-Q6_K.gguf) | i1-Q6_K | 16.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
assemblyai/distilbert-base-uncased-sst2 | assemblyai | 2021-06-14T22:04:03Z | 552 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.01108",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # DistilBERT-Base-Uncased for Sentiment Analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) originally released in ["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108) and trained on the [Stanford Sentiment Treebank v2 (SST2)](https://nlp.stanford.edu/sentiment/); part of the [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com) benchmark. This model was fine-tuned by the team at [AssemblyAI](https://www.assemblyai.com) and is released with the [corresponding blog post]().
## Usage
To download and utilize this model for sentiment analysis please execute the following:
```python
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("assemblyai/distilbert-base-uncased-sst2")
model = AutoModelForSequenceClassification.from_pretrained("assemblyai/distilbert-base-uncased-sst2")
tokenized_segments = tokenizer(["AssemblyAI is the best speech-to-text API for modern developers with performance being second to none!"], return_tensors="pt", padding=True, truncation=True)
tokenized_segments_input_ids, tokenized_segments_attention_mask = tokenized_segments.input_ids, tokenized_segments.attention_mask
model_predictions = F.softmax(model(input_ids=tokenized_segments_input_ids, attention_mask=tokenized_segments_attention_mask)['logits'], dim=1)
print("Positive probability: "+str(model_predictions[0][1].item()*100)+"%")
print("Negative probability: "+str(model_predictions[0][0].item()*100)+"%")
```
For questions about how to use this model feel free to contact the team at [AssemblyAI](https://www.assemblyai.com)! |
cahya/gpt2-medium-indonesian-story | cahya | 2023-08-13T00:33:14Z | 552 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | Entry not found |
axiong/PMC_LLaMA_13B | axiong | 2023-08-28T10:26:14Z | 552 | 25 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-08-28T05:38:32Z | ---
license: openrail
---
# PMC_LLaMA
To obtain the foundation model in medical field, we propose [MedLLaMA_13B](https://huggingface.co/chaoyi-wu/MedLLaMA_13B) and PMC_LLaMA_13B.
MedLLaMA_13B is initialized from LLaMA-13B and further pretrained on a medical corpus. Despite the expert knowledge gained, it lacks instruction-following ability.
We therefore construct an instruction-tuning dataset and evaluate the tuned model.
As shown in the table, PMC_LLaMA_13B achieves comparable results to ChatGPT on medical QA benchmarks.

## Usage
```python
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('axiong/PMC_LLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('axiong/PMC_LLaMA_13B')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(
inputs = batch["input_ids"],
max_length=200,
do_sample=True,
top_k=50
)
print('model predict: ',tokenizer.decode(generated[0]))
```
|
LoftQ/Llama-2-7b-hf-4bit-64rank | LoftQ | 2024-05-03T21:19:15Z | 552 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"quantization ",
"lora",
"en",
"arxiv:2310.08659",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2023-11-21T05:07:49Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- 'quantization '
- lora
---
# LoftQ Initialization
| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) |
LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W.
This model, `Llama-2-7b-hf-4bit-64rank`, is obtained from [LLAMA-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf).
The backbone is under `LoftQ/Llama-2-7b-hf-4bit-64rank` and LoRA adapters are under the `subfolder='loftq_init'`.
## Model Info
### Backbone
- Stored format: bitsandbytes nf4
- Size: ~ 4.2 GiB
- Loaded format: bitsandbytes nf4
- Size loaded on GPU: ~ 4.2 GiB
### LoRA adapters
- rank: 64
- lora_alpha: 16
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"]
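For reference, the kind of LoftQ initialization that produces a configuration like the one above can be sketched with PEFT's `LoftQConfig`. This is an illustrative example assuming a recent `peft` release, not the exact script used to build this repository:
```python
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

# Full-precision base model; LoftQ quantizes it and initializes the adapters jointly
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

loftq_config = LoftQConfig(loftq_bits=4)  # 4-bit backbone, as in this repo
lora_config = LoraConfig(
    init_lora_weights="loftq",
    loftq_config=loftq_config,
    r=64,
    lora_alpha=16,
    target_modules=["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
)
peft_model = get_peft_model(base_model, lora_config)
```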
## Usage
**Training** Here's an example of loading this model and preparing for the LoRA fine-tuning.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
MODEL_ID = "LoftQ/Llama-2-7b-hf-4bit-64rank"
base_model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
peft_model = PeftModel.from_pretrained(
base_model,
MODEL_ID,
subfolder="loftq_init",
is_trainable=True,
)
# Do training with peft_model ...
```
## Experiment Results
We have conducted experiments on supervised fine-tuning of [GSM8K](https://huggingface.co/datasets/gsm8k)
and [WikiText-2](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1).
| Model | Bits | Rank | LoRA Initial | GSM8K | WikiText-2 |
| -------------- | ---- | ---- | -------------------- | ----- | ---------- |
| LLAMA-2-7b | 16 | 64 | Gaussian + 0 | 36.9 | 5.08 |
| LLAMA-2-7b | 4 | 64 | Gaussian + 0 (QLoRA) | 35.1 | 5.70 |
| **LLAMA-2-7b** | 4 | 64 | LoftQ | 35.0 | 5.24 |
**Inference** Here is an example code for inference after the model has been fine-tuned on [GSM8K](https://huggingface.co/datasets/gsm8k).
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
MODEL_ID = "LoftQ/Llama-2-7b-hf-4bit-64rank"
base_model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.bfloat16, # you may change it with different models
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16, # bfloat16 is recommended
bnb_4bit_use_double_quant=False,
bnb_4bit_quant_type='nf4',
),
)
peft_model = PeftModel.from_pretrained(
base_model,
MODEL_ID,
subfolder="gsm8k",
    is_trainable=False,  # inference only, no further training
)
# Do inference with peft_model ...
```
See the full code at our [Github Repo](https://github.com/yxli2123/LoftQ).
## Citation
```bibtex
@article{li2023loftq,
title={Loftq: Lora-fine-tuning-aware quantization for large language models},
author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
journal={arXiv preprint arXiv:2310.08659},
year={2023}
}
```
|
davisonye/cg_pad_mai_cla_conxt_dav_1 | davisonye | 2024-02-06T20:26:51Z | 552 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-06T20:26:08Z | Entry not found |
mradermacher/reverie-7b-GGUF | mradermacher | 2024-05-06T05:08:20Z | 552 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:antiven0m/reverie-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-08T12:06:00Z | ---
base_model: antiven0m/reverie-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/antiven0m/reverie-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
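For example, a single quant file from this repo can be fetched programmatically with `huggingface_hub`; the filename below matches the Q4_K_M entry in the table that follows:
```python
from huggingface_hub import hf_hub_download

# Downloads the Q4_K_M quant to the local HF cache and returns its path
gguf_path = hf_hub_download(
    repo_id="mradermacher/reverie-7b-GGUF",
    filename="reverie-7b.Q4_K_M.gguf",
)
print(gguf_path)
```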
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/reverie-7b-GGUF/resolve/main/reverie-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qihoo360/360Zhinao-1.8B-Reranking | qihoo360 | 2024-05-24T03:07:31Z | 552 | 3 | transformers | [
"transformers",
"pytorch",
"zhinao",
"text-generation",
"mteb",
"qihoo360",
"奇虎360",
"RAG-reranking",
"custom_code",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-05-20T03:24:18Z | ---
tags:
- mteb
- qihoo360
- 奇虎360
- RAG-reranking
model-index:
- name: 360Zhinao-1.8B-reranking
results:
- task:
type: Reranking
dataset:
type: None
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 86.75017961853382
- type: mrr
value: 89.15436507936508
- task:
type: Reranking
dataset:
type: None
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 87.91572151930174
- type: mrr
value: 89.98869047619048
- task:
type: Reranking
dataset:
type: None
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 37.28779203409935
- type: mrr
value: 36.23730158730159
- task:
type: Reranking
dataset:
type: None
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 68.55153559405632
- type: mrr
value: 79.62773774596725
license: apache-2.0
library_name: transformers
---
<br>
# MTEB Leaderboard Chinese Reranking Results
We have validated the performance of our model on the [mteb-chinese-reranking leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Currently, the open-source models on this leaderboard are primarily bidirectional discriminative models (BERT-like models). The only unidirectional generative model (GPT-like model) is gte-Qwen1.5-7B-instruct, which has an average score of 66.38, ranking 25th, with less than ideal results. Our self-developed unidirectional generative model, 360Zhinao-1.8B-reranking, achieved an average score of 70.13, currently ranking first overall and first among open-source models, opening up new possibilities for generative models to undertake discriminative tasks. For more detail, please check [360zhinao](https://github.com/Qihoo360/360zhinao/tree/main).
| Model | T2Reranking | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|
| **360Zhinao-1.8B-Reranking** | **68.55** | **37.29** | **86.75** | **87.92** | **70.13** |
| piccolo-large-zh-v2 | 67.15 | 33.39 | 90.14 | 89.31 | 70 |
| Baichuan-text-embedding | 67.85 | 34.3 | 88.46 | 88.06 | 69.67 |
| stella-mrl-large-zh-v3.5-1792d | 66.43 | 28.85 | 89.18 | 89.33 | 68.45 |
| PEG | 69.43 | 33.55 | 86.56 | 84.09 | 68.41 |
| bge-reranker-base | 67.28 | 35.46 | 81.27 | 84.1 | 67.03 |
| bge-reranker-large | 67.6 | 37.17 | 82.14 | 84.19 | 67.78 |
# Requirements
```bash
pip install -r requirements.txt
```
If your GPU supports fp16 or bf16 precision, we also recommend installing [flash-attention](https://github.com/Dao-AILab/flash-attention) (**now with support for flash attention 2**) to improve your runtime efficiency and reduce memory usage. (**flash-attention is optional and not required for running this project**)
```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# The installation below is optional and might be slow.
# pip install csrc/layer_norm
# No need to install the following if the flash-attn version is above 2.1.1.
# pip install csrc/rotary
```
You can also use the following command to install flash-attention.
```bash
FLASH_ATTENTION_FORCE_BUILD=TRUE ./miniconda3/bin/python -m pip install flash-attn==2.3.6
```
# Model Introduction
The **360Zhinao-1.8B-Reranking** model is built on our self-developed 360Zhinao-1.8B-Base model. By iteratively identifying and resolving the technical issues below, we better exploit the world knowledge the model acquired during pre-training and narrow the gap between generative models and discriminative tasks.
## Data Processing
Training did not introduce external world knowledge: we neither continued pre-training on domain-specific data nor fine-tuned on datasets outside the four leaderboard datasets. Using only those four datasets, we carefully iterated on data perception, data cleaning, and data mining to ensure the ranking on each individual task could reach the top-three level.
## Resolving Task Conflicts and Catastrophic Forgetting
When merging the four tasks, conflicts arise between them because of differences in data domain distributions, answer patterns, training data volumes, convergence steps, and even sequence lengths. Thoroughly resolving these conflicts is crucial to obtaining a general model with the best overall metrics across the different tasks.
## Resolving Training Instability
Unlike generative tasks, which produce sequences of tokens, using a generative model for a discriminative task requires it to output a continuous value, which leads to oscillation during training. Carefully analyzing and resolving this training instability yields a model with better generalization and robustness.
# Inference Script
Copy the following script into [mteb-eval-scripts](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB), replace FlagReranker with FlagRerankerCustom in the [eval_cross_encoder](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/eval_cross_encoder.py) script, and then run [eval_cross_encoder](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/eval_cross_encoder.py) to reproduce our complete results on the [mteb-chinese-reranking leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
```python
from typing import cast, List, Union, Tuple, Dict, Optional
import numpy as np
import torch
from tqdm import tqdm
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import transformers
from transformers.trainer_pt_utils import LabelSmoother
IGNORE_TOKEN_ID = LabelSmoother.ignore_index
def preprocess(
sources,
tokenizer: transformers.PreTrainedTokenizer,
max_len: int = 1024,
system_message: str = "",
device = None,
) -> Dict:
roles = {"user": "<|im_start|>user", "assistant": "<|im_start|>assistant"}
answer_len = 64
im_start = tokenizer.im_start_id
im_end = tokenizer.im_end_id
nl_tokens = tokenizer('\n').input_ids
_system = tokenizer('system').input_ids + nl_tokens
_user = tokenizer('user').input_ids + nl_tokens
_assistant = tokenizer('assistant').input_ids + nl_tokens
# Apply prompt templates
input_ids, targets = [], []
for i, source in enumerate(sources):
## system_message
input_id, target = [], []
system = [im_start] + _system + tokenizer(system_message, max_length=max_len-answer_len, truncation=True).input_ids + [im_end] + nl_tokens
input_id += system
target += [im_start] + [IGNORE_TOKEN_ID] * (len(system)-3) + [im_end] + nl_tokens
assert len(input_id) == len(target)
## query ans
source = "\n\n".join(source)
role = "<|im_start|>user"
_input_id = tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids + nl_tokens + \
tokenizer(source, max_length=max_len-answer_len, truncation=True).input_ids + [im_end] + nl_tokens
input_id += _input_id
if role == '<|im_start|>user':
_target = [im_start] + [IGNORE_TOKEN_ID] * (len(_input_id)-3) + [im_end] + nl_tokens
elif role == '<|im_start|>assistant':
_target = [im_start] + [IGNORE_TOKEN_ID] * len(tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids) + \
_input_id[len(tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids)+1:-2] + [im_end] + nl_tokens
else:
raise NotImplementedError
target += _target
## label use placeholder 0; It will be masked later in the modeling_zhinao.py
role = "<|im_start|>assistant"
_input_id = tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids + nl_tokens + \
tokenizer("0", max_length=max_len-answer_len, truncation=True).input_ids + [im_end] + nl_tokens
input_id += _input_id
if role == '<|im_start|>user':
_target = [im_start] + [IGNORE_TOKEN_ID] * (len(_input_id)-3) + [im_end] + nl_tokens
elif role == '<|im_start|>assistant':
_target = [im_start] + [IGNORE_TOKEN_ID] * len(tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids) + \
_input_id[len(tokenizer(role, max_length=max_len-answer_len, truncation=True).input_ids)+1:-2] + [im_end] + nl_tokens
else:
raise NotImplementedError
target += _target
assert len(input_id) == len(target)
input_id += [tokenizer.pad_token_id] * (max_len - len(input_id))
target += [IGNORE_TOKEN_ID] * (max_len - len(target))
if len(input_id) > max_len:
print("max_len_error")
print(tokenizer.decode(input_id))
input_ids.append(input_id[:max_len])
targets.append(target[:max_len])
input_ids = torch.tensor(input_ids, dtype=torch.int)
targets = torch.tensor(targets, dtype=torch.int)
#print(f"input_ids {input_ids.shape}")
#print(f"targets {targets.shape}")
return dict(
input_ids=input_ids.to(device),
labels=targets.to(device),
attention_mask=input_ids.ne(tokenizer.pad_token_id).to(device),
)
class FlagRerankerCustom:
def __init__(
self,
model_name_or_path: str = None,
use_fp16: bool = False
) -> None:
self.tokenizer = transformers.AutoTokenizer.from_pretrained(
model_name_or_path,
model_max_length=1024,
padding_side="right",
use_fast=False,
trust_remote_code=True
)
self.tokenizer.pad_token_id = self.tokenizer.eod_id
config = transformers.AutoConfig.from_pretrained(
model_name_or_path,
trust_remote_code=True,
bf16=True,
)
config.use_cache = False
self.model = transformers.AutoModelForCausalLM.from_pretrained(
model_name_or_path,
config=config,
trust_remote_code=True,
)
self.model.linear.bfloat16()
if torch.cuda.is_available():
self.device = torch.device('cuda')
elif torch.backends.mps.is_available():
self.device = torch.device('mps')
else:
self.device = torch.device('cpu')
use_fp16 = False
if use_fp16:
self.model.half()
self.model = self.model.to(self.device)
self.model.eval()
self.num_gpus = torch.cuda.device_count()
if self.num_gpus > 1:
print(f"----------using {self.num_gpus}*GPUs----------")
self.model = torch.nn.DataParallel(self.model)
@torch.no_grad()
def compute_score(self, sentence_pairs: Union[List[Tuple[str, str]], Tuple[str, str]], batch_size: int =128,
max_length: int = 1024) -> List[float]:
if self.num_gpus > 0:
batch_size = batch_size * self.num_gpus
assert isinstance(sentence_pairs, list)
if isinstance(sentence_pairs[0], str):
sentence_pairs = [sentence_pairs]
all_scores = []
for start_index in tqdm(range(0, len(sentence_pairs), batch_size), desc="Compute Scores",
disable=False):
sentences_batch = sentence_pairs[start_index:start_index + batch_size] # [[q,ans],[q, ans]...]
inputs = preprocess(sources=sentences_batch, tokenizer=self.tokenizer,max_len=1024,device=self.device)
scores = self.model(**inputs, return_dict=True).logits.view(-1, ).float()
all_scores.extend(scores.cpu().numpy().tolist())
if len(all_scores) == 1:
return all_scores[0]
return all_scores
if __name__ == "__main__":
model_name_or_path = "360Zhinao-1.8B-Reranking"
model = FlagRerankerCustom(model_name_or_path, use_fp16=False)
inputs=[["What Color Is the Sky","Blue"], ["What Color Is the Sky","Pink"],]
ret = model.compute_score(inputs)
print(ret)
```
## License
The source code of this repository follows the open-source license Apache 2.0.
360Zhinao open-source models support commercial use. If you wish to use these models or continue training them for commercial purposes, please contact us via email ([email protected]) to apply. For the specific license agreement, please see <<360 Zhinao Open-Source Model License>>.
|
DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1 | DiscoResearch | 2024-05-29T11:37:33Z | 552 | 12 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"de",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-24T07:36:23Z | ---
license: llama3
language:
- de
library_name: transformers
---
# Llama3-DiscoLeo-Instruct 8B 32k-context (version 0.1)
## Thanks and Accreditation
[DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729)
is the result of a joint effort between [DiscoResearch](https://huggingface.co/DiscoResearch) and [Occiglot](https://huggingface.co/occiglot)
with support from the [DFKI](https://www.dfki.de/web/) (German Research Center for Artificial Intelligence) and [hessian.Ai](https://hessian.ai).
Occiglot kindly handled data preprocessing, filtering, and deduplication as part of their latest [dataset release](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5), as well as sharing their compute allocation at hessian.Ai's 42 Supercomputer.
## Model Overview
DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1 is an instruction-tuned version of our long-context [Llama3-German-8B-32k](https://huggingface.co/DiscoResearch/Llama3_German_8B_32k).
The base model was derived from [Meta's Llama3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) through continuous pretraining on 65 billion high-quality German tokens, similar to previous [LeoLM](https://huggingface.co/LeoLM) or [Occiglot](https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01) models.
For the long-context version, we trained on an additional 100 million tokens at a 32k context length, using a rope_theta value of 1.5e6, a learning rate of 1.5e-5, and a batch size of 256*8192, with otherwise the same hyperparameters as the base model.
We finetuned this checkpoint on the German Instruction dataset from DiscoResearch created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)).
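For reference, the long-context settings described above can be inspected directly from the published configuration. The following is a minimal sketch (not from the original card), assuming the standard Llama config attribute names:

```python
# Minimal sketch (assumes the standard Llama config fields; values per the text above)
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1")
print(config.max_position_embeddings)  # expected: 32768 for this 32k variant
print(config.rope_theta)               # expected: 1.5e6, as stated above
```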
## How to use
Llama3_DiscoLeo_Instruct_8B_32k_v0.1 uses the [Llama-3 chat template](https://github.com/meta-llama/llama3?tab=readme-ov-file#instruction-tuned-models), which can be easily used with [Transformers' chat templating](https://huggingface.co/docs/transformers/main/en/chat_templating).
See [below](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1#usage-example) for a usage example.
## Model Training and Hyperparameters
The model was full-finetuned with axolotl on [hessian.Ai 42](https://hessian.ai) with a context length of 32,768, a learning rate of 2e-5, and a batch size of 16.
## Evaluation and Results
We evaluated the model using a suite of common English Benchmarks and their German counterparts with [GermanBench](https://github.com/bjoernpl/GermanBenchmark).
In the image and corresponding table below, you can see the benchmark scores for the different instruct models compared to Meta's instruct version. All checkpoints are available in this [collection](https://huggingface.co/collections/DiscoResearch/discoleo-8b-llama3-for-german-6650527496c0fafefd4c9729).

| Model | truthful_qa_de | truthfulqa_mc | arc_challenge | arc_challenge_de | hellaswag | hellaswag_de | MMLU | MMLU-DE | mean |
|----------------------------------------------------|----------------|---------------|---------------|------------------|-------------|--------------|-------------|-------------|-------------|
| meta-llama/Meta-Llama-3-8B-Instruct | 0.47498 | 0.43923 | **0.59642** | 0.47952 | **0.82025** | 0.60008 | **0.66658** | 0.53541 | 0.57656 |
| DiscoResearch/Llama3-German-8B | 0.49499 | 0.44838 | 0.55802 | 0.49829 | 0.79924 | 0.65395 | 0.62240 | 0.54413 | 0.57743 |
| DiscoResearch/Llama3-German-8B-32k | 0.48920 | 0.45138 | 0.54437 | 0.49232 | 0.79078 | 0.64310 | 0.58774 | 0.47971 | 0.55982 |
| DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1 | **0.53042** | 0.52867 | 0.59556 | **0.53839** | 0.80721 | 0.66440 | 0.61898 | 0.56053 | **0.60552** |
| **DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1** | 0.52749 | **0.53245** | 0.58788 | 0.53754 | 0.80770 | **0.66709** | 0.62123 | **0.56238** | 0.60547 |
## Model Configurations
We release DiscoLeo-8B in the following configurations:
1. [Base model with continued pretraining](https://huggingface.co/DiscoResearch/Llama3-German_8B)
2. [Long-context version (32k context length)](https://huggingface.co/DiscoResearch/Llama3_German_8B_32k)
3. [Instruction-tuned version of the base model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_v0.1)
4. [Instruction-tuned version of the long-context model](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_Instruct_8B_32k_v0.1) (This model)
5. [Experimental `DARE-TIES` Merge with Llama3-Instruct](https://huggingface.co/DiscoResearch/Llama3_DiscoLeo_8B_DARE_Experimental)
6. [Collection of Quantized versions](https://huggingface.co/collections/DiscoResearch/discoleo-8b-quants-6651bcf8f72c9a37ce485d42)
## Usage Example
Here's how to use the model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device="cuda"
model = AutoModelForCausalLM.from_pretrained(
"DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1")
prompt = "Schreibe ein Essay über die Bedeutung der Energiewende für Deutschlands Wirtschaft"
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Acknowledgements
The model was trained and evaluated by [Björn Plüster](https://huggingface.co/bjoernp) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)) with data preparation and project supervision by [Manuel Brack](http://manuel-brack.eu) ([DFKI](https://www.dfki.de/web/), [TU-Darmstadt](https://www.tu-darmstadt.de/)). Initial work on dataset collection and curation was performed by [Malte Ostendorff](https://ostendorff.org) and [Pedro Ortiz Suarez](https://portizs.eu). Instruction tuning was done with the DiscoLM German dataset created by [Jan-Philipp Harries](https://huggingface.co/jphme) and [Daniel Auras](https://huggingface.co/rasdani) ([DiscoResearch](https://huggingface.co/DiscoResearch), [ellamind](https://ellamind.com)). We extend our gratitude to [LAION](https://laion.ai/) and friends, especially [Christoph Schuhmann](https://entwickler.de/experten/christoph-schuhmann) and [Jenia Jitsev](https://huggingface.co/JJitsev), for initiating this collaboration.
The model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D). |
jampekka/4chan_out | jampekka | 2024-05-24T11:53:37Z | 552 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-24T11:16:06Z | ---
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: 4chan_out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4chan_out
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
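Purely as an illustration (the actual training script and dataset are not documented here), these settings map onto `transformers.TrainingArguments` roughly as follows; the output directory name is a placeholder:

```python
# Illustrative only: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="4chan_out",          # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default optimizer settings.
)
```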
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
adamzinebi/gpt2_pad_token | adamzinebi | 2024-06-02T10:57:37Z | 552 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-02T10:57:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
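In the absence of card-specific instructions, the following is a generic, hedged sketch based only on the repository's `gpt2` / `text-generation` tags:

```python
# Generic sketch, not taken from this card: standard causal-LM loading via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "adamzinebi/gpt2_pad_token"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

print(tokenizer.pad_token)  # presumably set, given the repository name
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```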
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Aira-2-355M | Felladrin | 2024-06-07T09:58:28Z | 552 | 0 | null | [
"gguf",
"base_model:nicholasKluge/Aira-2-355M",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-07T09:49:51Z | ---
license: apache-2.0
base_model: nicholasKluge/Aira-2-355M
---
GGUF version of [nicholasKluge/Aira-2-355M](https://huggingface.co/nicholasKluge/Aira-2-355M).
|
sonoisa/t5-base-japanese-title-generation | sonoisa | 2022-02-21T13:38:09Z | 551 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"seq2seq",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
---
# A model for generating titles from article body text
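A hedged usage sketch (not from the original card; see the linked article below for the author's exact preprocessing):

```python
# Hedged sketch: generic seq2seq generation; the input string is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "sonoisa/t5-base-japanese-title-generation"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

body = "..."  # Japanese article body text
inputs = tokenizer(body, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```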
SEE: https://qiita.com/sonoisa/items/a9af64ff641f0bbfed44 |
bigscience/mt0-xxl-mt | bigscience | 2024-06-07T09:57:51Z | 551 | 48 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"af",
"am",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"co",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fil",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"haw",
"hi",
"hmn",
"ht",
"hu",
"hy",
"ig",
"is",
"it",
"iw",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lb",
"lo",
"lt",
"lv",
"mg",
"mi",
"mk",
"ml",
"mn",
"mr",
"ms",
"mt",
"my",
"ne",
"nl",
"no",
"ny",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sd",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"st",
"su",
"sv",
"sw",
"ta",
"te",
"tg",
"th",
"tr",
"uk",
"und",
"ur",
"uz",
"vi",
"xh",
"yi",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3mt",
"dataset:mc4",
"arxiv:2211.01786",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-27T21:21:14Z | ---
datasets:
- bigscience/xP3mt
- mc4
license: apache-2.0
language:
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
tags:
- text2text-generation
widget:
- text: Life is beautiful! Translate to Mongolian.
example_title: mn-en translation
- text: Le mot japonais «憂鬱» veut dire quoi en Odia?
example_title: jp-or-fr translation
- text: >-
Stell mir eine schwierige Quiz Frage bei der es um Astronomie geht. Bitte
stell die Frage auf Norwegisch.
example_title: de-nb quiz
- text: >-
一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
review as positive, neutral or negative?
example_title: zh-en sentiment
- text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
example_title: zh-zh sentiment
- text: Suggest at least five related search terms to "Mạng neural nhân tạo".
example_title: vi-en query
- text: >-
Proposez au moins cinq mots clés concernant «Réseau de neurones
artificiels».
example_title: fr-fr query
- text: Explain in a sentence in Telugu what is backpropagation in neural networks.
example_title: te-en qa
- text: Why is the sky blue?
example_title: en-en qa
- text: >-
Write a fairy tale about a troll saving a princess from a dangerous dragon.
The fairy tale is a masterpiece that has achieved praise worldwide and its
moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
example_title: es-en fable
- text: >-
Write a fable about wood elves living in a forest that is suddenly invaded
by ogres. The fable is a masterpiece that has achieved praise worldwide and
its moral is "Violence is the last refuge of the incompetent". Fable (in
Hindi):
example_title: hi-en fable
model-index:
- name: mt0-xxl-mt
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 62.67
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 83.31
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 78.31
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 80.19
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 80.99
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 79.05
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 82.34
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 49.5
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 42
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 48.17
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 87.5
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 84.84
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.03
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 59.92
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.16
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 59.2
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 62.25
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.92
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 59.88
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.47
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.67
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.79
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.03
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 57.67
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 55.98
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.92
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 58.71
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: '2016'
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 94.66
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 88
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 81
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 79
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 90
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 88
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 81
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 81
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 76
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 76
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 85
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 87
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 91
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 93.38
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 91.13
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 90.73
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 93.05
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 86.7
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 91.66
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 89.61
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 90.4
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 93.05
pipeline_tag: text2text-generation
---

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xxl-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xxl-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "bigscience/mt0-xxl-mt"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
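To illustrate the point, here is a sketch (not taken from the paper) that reuses the `tokenizer` and `model` loaded as in the snippets above:

```python
# Illustrative comparison of an under-specified prompt versus a clearer one.
for prompt in [
    "Translate to English: Je t'aime",                # may be continued in French
    "Translate to English: Je t'aime. Translation:",  # clearly marks where the answer starts
]:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs)
    print(prompt, "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```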
# Training
## Model
- **Architecture:** Same as [mt5-xxl](https://huggingface.co/google/mt5-xxl), also refer to the `config.json` file
- **Finetuning steps:** 7000
- **Finetuning tokens:** 1.29 billion
- **Precision:** bfloat16
## Hardware
- **TPUs:** TPUv4-256
## Software
- **Orchestration:** [T5X](https://github.com/google-research/t5x)
- **Neural networks:** [Jax](https://github.com/google/jax)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
``` |
keremberke/yolov5m-forklift | keremberke | 2023-01-01T20:53:42Z | 551 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/forklift-object-detection",
"model-index",
"region:us"
]
| object-detection | 2023-01-01T20:35:43Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov5m-forklift
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: keremberke/forklift-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.8515819366709647 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5m-forklift" src="https://huggingface.co/keremberke/yolov5m-forklift/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5m-forklift')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-forklift --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)** |
timm/res2net50_26w_6s.in1k | timm | 2023-04-24T00:05:41Z | 551 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
]
| image-classification | 2023-04-24T00:05:04Z | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2net50_26w_6s.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 37.1
- GMACs: 6.3
- Activations (M): 15.3
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2net50_26w_6s.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_6s.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_6s.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
timm/poolformerv2_s36.sail_in1k | timm | 2023-05-05T06:20:58Z | 551 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-05-05T06:20:23Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for poolformerv2_s36.sail_in1k
A PoolFormer-v2 (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 30.8
- GMACs: 5.0
- Activations (M): 15.8
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('poolformerv2_s36.sail_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformerv2_s36.sail_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'poolformerv2_s36.sail_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
davanstrien/chat_topics | davanstrien | 2023-06-01T08:10:20Z | 551 | 2 | bertopic | [
"bertopic",
"text-classification",
"en",
"dataset:OpenAssistant/oasst1",
"license:mit",
"region:us"
]
| text-classification | 2023-05-29T18:39:42Z | ---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
license: mit
datasets:
- OpenAssistant/oasst1
language:
- en
---
# chat_topics
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("davanstrien/chat_topics")
topic_model.get_topic_info()
```
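As a possible next step (not part of the original card), new documents can be assigned to the learned topics with BERTopic's `transform` method; depending on how the model was saved, the original embedding model may need to be available:

```python
# Assign topics to new chat messages; topic IDs refer to the overview table below.
docs = [
    "How do I install Ubuntu alongside Windows?",
    "Write me a short bedtime story about a dragon.",
]
topics, probs = topic_model.transform(docs)  # probs may be None (calculate_probabilities=False)
print(topics)
print(topic_model.get_topic(topics[0]))  # keywords of the first assigned topic
```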
## Topic overview
* Number of topics: 75
* Number of training documents: 63530
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | provide - using - information - sure - help | 26 | -1_provide_using_information_sure |
| 0 | openai - ai - chatgpt - assistant - language | 7837 | Generative AI |
| 1 | anytime - welcome - assistance - helpful - thank | 1342 | 1_anytime_welcome_assistance_helpful |
| 2 | quantum - particle - physics - particles - relativity | 778 | Physics |
| 3 | story - lived - life - novel - felt | 569 | 3_story_lived_life_novel |
| 4 | letter - sincerely - regards - email - dear | 516 | 4_letter_sincerely_regards_email |
| 5 | rust - haskell - programming - java - languages | 504 | programming |
| 6 | css - html - style - div - js | 494 | web programming |
| 7 | linux - ubuntu - debian - fedora - install | 440 | 7_linux_ubuntu_debian_fedora |
| 8 | recipe - bake - ingredients - baking - dough | 425 | 8_recipe_bake_ingredients_baking |
| 9 | websocket - json - socket - api - discord | 425 | 9_websocket_json_socket_api |
| 10 | communism - capitalism - marx - economic - economy | 424 | 10_communism_capitalism_marx_economic |
| 11 | dog - pet - breed - breeds - pets | 408 | 11_dog_pet_breed_breeds |
| 12 | philosophy - theological - philosophical - beliefs - consciousness | 394 | 12_philosophy_theological_philosophical_beliefs |
| 13 | git - github - repository - software - commit | 381 | 13_git_github_repository_software |
| 14 | music - songs - musical - lyrics - song | 370 | 14_music_songs_musical_lyrics |
| 15 | devops - development - developers - industry - develop | 323 | 15_devops_development_developers_industry |
| 16 | pythagorean - hypotenuse - triangle - math - sqrt | 302 | 16_pythagorean_hypotenuse_triangle_math |
| 17 | eu - europe - economy - economic - war | 291 | 17_eu_europe_economy_economic |
| 18 | sleep - asleep - bedtime - procrastination - depression | 280 | 18_sleep_asleep_bedtime_procrastination |
| 19 | kramer - seinfeld - jerry - cafe - elaine | 279 | 19_kramer_seinfeld_jerry_cafe |
| 20 | printing - prints - printer - print - printers | 276 | 20_printing_prints_printer_print |
| 21 | influenza - flu - panic - symptoms - medical | 251 | 21_influenza_flu_panic_symptoms |
| 22 | chess - chessboard - practice - strategy - learn | 242 | 22_chess_chessboard_practice_strategy |
| 23 | algorithm - primes - array - integers - python | 240 | 23_algorithm_primes_array_integers |
| 24 | youtube - viewers - media - google - streaming | 240 | 24_youtube_viewers_media_google |
| 25 | poison - chemicals - powder - turpentine - smoke | 226 | 25_poison_chemicals_powder_turpentine |
| 26 | monday - sunday - count_weekend_days - calendar - dates | 216 | 26_monday_sunday_count_weekend_days_calendar |
| 27 | colors - colour - color - pigments - blue | 208 | 27_colors_colour_color_pigments |
| 28 | roman - attila - rome - empire - warfare | 205 | 28_roman_attila_rome_empire |
| 29 | investing - investments - investment - stocks - financial | 204 | 29_investing_investments_investment_stocks |
| 30 | vocabulary - wordle - words - scrabble - word | 201 | 30_vocabulary_wordle_words_scrabble |
| 31 | planets - sun - earth - planet - pluto | 198 | 31_planets_sun_earth_planet |
| 32 | renewable - solar - electricity - energy - electrical | 190 | 32_renewable_solar_electricity_energy |
| 33 | pygame - ball_radius - draw - circle - canvas | 181 | 33_pygame_ball_radius_draw_circle |
| 34 | fishing - fish - boat - hiking - camping | 176 | 34_fishing_fish_boat_hiking |
| 35 | gpus - gpu - motherboard - cpu - hardware | 162 | 35_gpus_gpu_motherboard_cpu |
| 36 | hvac - remodeling - energy - kwh - housing | 159 | 36_hvac_remodeling_energy_kwh |
| 37 | database - graphql - databases - postgresql - sql | 159 | 37_database_graphql_databases_postgresql |
| 38 | información - significado - cómo - como - sistemas | 158 | 38_información_significado_cómo_como |
| 39 | motherboard - pcie - gpu - bios - computer | 153 | 39_motherboard_pcie_gpu_bios |
| 40 | crops - produce - planting - peppers - plants | 148 | 40_crops_produce_planting_peppers |
| 41 | paintings - art - modernist - artists - modern | 148 | 41_paintings_art_modernist_artists |
| 42 | workout - exercises - dumbbells - dumbbell - exercise | 147 | 42_workout_exercises_dumbbells_dumbbell |
| 43 | climate - warming - pollution - environmental - emissions | 142 | 43_climate_warming_pollution_environmental |
| 44 | coffee - espresso - brewing - tea - beans | 137 | 44_coffee_espresso_brewing_tea |
| 45 | velocity - drag - acceleration - density - formula | 132 | 45_velocity_drag_acceleration_density |
| 46 | woodchuck - woodchucks - units - kilogram - kilograms | 130 | 46_woodchuck_woodchucks_units_kilogram |
| 47 | ascii - glyphs - hiragana - art - font | 129 | 47_ascii_glyphs_hiragana_art |
| 48 | guitars - guitar - strings - guitarists - instrument | 127 | 48_guitars_guitar_strings_guitarists |
| 49 | tallest - buildings - building - burj - khalifa | 114 | 49_tallest_buildings_building_burj |
| 50 | flat - earth - curvature - spherical - tectonic | 111 | 50_flat_earth_curvature_spherical |
| 51 | essay - awareness - understanding - being - be | 102 | 51_essay_awareness_understanding_being |
| 52 | portals - ender - portal - obsidian - netherite | 102 | 52_portals_ender_portal_obsidian |
| 53 | android - apple - phones - devices - vehicles | 101 | 53_android_apple_phones_devices |
| 54 | fasting - dietary - diet - eating - metabolic | 101 | 54_fasting_dietary_diet_eating |
| 55 | meditation - relief - pain - health - nociception | 99 | 55_meditation_relief_pain_health |
| 56 | weather - forecast - forecasts - raining - precipitation | 95 | 56_weather_forecast_forecasts_raining |
| 57 | president - presidents - presidency - constitution - biden | 94 | 57_president_presidents_presidency_constitution |
| 58 | no - nope - yes - not - maybe | 94 | 58_no_nope_yes_not |
| 59 | peregrine - airspeed - falcon - speed - bird | 90 | 59_peregrine_airspeed_falcon_speed |
| 60 | crontab - cron - myscript - script - bash | 83 | 60_crontab_cron_myscript_script |
| 61 | youtuber - streamer - ceo - musk - founder | 83 | 61_youtuber_streamer_ceo_musk |
| 62 | layovers - flights - circumnavigate - layover - travel | 83 | 62_layovers_flights_circumnavigate_layover |
| 63 | keyboards - keyboard - switches - qwerty - types | 83 | 63_keyboards_keyboard_switches_qwerty |
| 64 | file_path_in_dir1 - file_path1 - csv_file - file_path_in_dir2 - file_path2 | 80 | 64_file_path_in_dir1_file_path1_csv_file_file_path_in_dir2 |
| 65 | pele - maradona - lebron - ronaldo - nba | 76 | 65_pele_maradona_lebron_ronaldo |
| 66 | alopecia - hairstyles - hairstyle - hair - scalp | 66 | 66_alopecia_hairstyles_hairstyle_hair |
| 67 | nginx - docker - kubernetes - proxy_pass - nodeport | 65 | 67_nginx_docker_kubernetes_proxy_pass |
| 68 | directories - directory - sudo - filesystem - folders | 62 | 68_directories_directory_sudo_filesystem |
| 69 | gps - map - geocaching - maps - armenia | 52 | 69_gps_map_geocaching_maps |
| 70 | meiosis - mitosis - fertilization - reproduction - ovulation | 51 | 70_meiosis_mitosis_fertilization_reproduction |
| 71 | colleges - admissions - universities - campus - university | 43 | 71_colleges_admissions_universities_campus |
| 72 | unicorns - unicorn - pony - ponies - mythical | 32 | 72_unicorns_unicorn_pony_ponies |
| 73 | superpowers - abilities - superhero - superhuman - powers | 28 | 73_superpowers_abilities_superhero_superhuman |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 20
* n_gram_range: (1, 1)
* nr_topics: 75
* seed_topic_list: None
* top_n_words: 10
* verbose: True
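For orientation only, these hyperparameters correspond roughly to the constructor call below. This is not the original training script, and the embedding model is an assumption (it is not recorded above):

```python
# Rough reconstruction of the training setup implied by the hyperparameters above.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed, not documented
topic_model = BERTopic(
    embedding_model=embedding_model,
    nr_topics=75,
    min_topic_size=20,
    top_n_words=10,
    n_gram_range=(1, 1),
    calculate_probabilities=False,
    low_memory=False,
    verbose=True,
)
# topics, _ = topic_model.fit_transform(docs)  # docs: the training documents
```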
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11 |
mradermacher/Uncensored-Frank-Llama-3-8B-GGUF | mradermacher | 2024-05-05T14:44:24Z | 551 | 2 | transformers | [
"transformers",
"gguf",
"Uncensored conversation",
"Uncensored jokes",
"Uncensored romance",
"en",
"base_model:ajibawa-2023/Uncensored-Frank-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-04T12:46:02Z | ---
base_model: ajibawa-2023/Uncensored-Frank-Llama-3-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- Uncensored conversation
- Uncensored jokes
- Uncensored romance
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ajibawa-2023/Uncensored-Frank-Llama-3-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
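For a concrete starting point, here is a minimal sketch (not from this card) using llama-cpp-python, one common way to run GGUF quants locally; the quant filename is an example:

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Uncensored-Frank-Llama-3-8B.Q4_K_M.gguf",  # example file from the table below
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```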
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Frank-Llama-3-8B-GGUF/resolve/main/Uncensored-Frank-Llama-3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
saishf/Aura-Uncensored-OAS-8B-L3 | saishf | 2024-05-12T14:28:09Z | 551 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Undi95/Llama3-Unholy-8B-OAS",
"base_model:ResplendentAI/Theory_of_Mind_Llama3",
"base_model:ResplendentAI/Luna_Llama3",
"base_model:ResplendentAI/Aura_Llama3",
"base_model:ResplendentAI/BlueMoon_Llama3",
"base_model:ResplendentAI/RP_Format_QuoteAsterisk_Llama3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-12T14:13:55Z | ---
license: cc-by-nc-4.0
base_model:
- Undi95/Llama3-Unholy-8B-OAS
- ResplendentAI/Theory_of_Mind_Llama3
- Undi95/Llama3-Unholy-8B-OAS
- ResplendentAI/Luna_Llama3
- Undi95/Llama3-Unholy-8B-OAS
- Undi95/Llama3-Unholy-8B-OAS
- ResplendentAI/Aura_Llama3
- Undi95/Llama3-Unholy-8B-OAS
- ResplendentAI/BlueMoon_Llama3
- Undi95/Llama3-Unholy-8B-OAS
- ResplendentAI/RP_Format_QuoteAsterisk_Llama3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
This is a rerun of [ResplendentAI/Aura_Uncensored_l3_8B](https://huggingface.co/ResplendentAI/Aura_Uncensored_l3_8B) using [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) instead of [Undi95/Llama-3-Unholy-8B](https://huggingface.co/Undi95/Llama-3-Unholy-8B), in the hope that it will result in a less censored Aura.
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) as a base.
### Models Merged
The following models were included in the merge:
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) + [ResplendentAI/Theory_of_Mind_Llama3](https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) + [ResplendentAI/Luna_Llama3](https://huggingface.co/ResplendentAI/Luna_Llama3)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) + [ResplendentAI/Aura_Llama3](https://huggingface.co/ResplendentAI/Aura_Llama3)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS) + [ResplendentAI/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/ResplendentAI/RP_Format_QuoteAsterisk_Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Undi95/Llama3-Unholy-8B-OAS+ResplendentAI/Theory_of_Mind_Llama3
- model: Undi95/Llama3-Unholy-8B-OAS+ResplendentAI/Aura_Llama3
- model: Undi95/Llama3-Unholy-8B-OAS+ResplendentAI/Luna_Llama3
- model: Undi95/Llama3-Unholy-8B-OAS+ResplendentAI/BlueMoon_Llama3
- model: Undi95/Llama3-Unholy-8B-OAS+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: Undi95/Llama3-Unholy-8B-OAS
dtype: float16
```
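For completeness, here is a minimal sketch of loading the resulting merge for inference with `transformers` (the prompt and generation settings are illustrative only, not recommended defaults):

```python
# A minimal sketch, not the canonical usage: load the merged model and generate text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "saishf/Aura-Uncensored-OAS-8B-L3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```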
|
bartowski/Quintellect-10.7B-GGUF | bartowski | 2024-05-17T13:33:42Z | 551 | 0 | null | [
"gguf",
"code",
"text-generation",
"en",
"dataset:sahil2801/CodeAlpaca-20k",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-05-17T12:59:27Z | ---
license: apache-2.0
tags:
- code
datasets:
- sahil2801/CodeAlpaca-20k
language:
- en
inference: false
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Quintellect-10.7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2901">b2901</a> for quantization.
Original model: https://huggingface.co/Walmart-the-bag/Quintellect-10.7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
{system_prompt}
Human: {prompt}
Assistant: </s>
```
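A minimal sketch of applying this template with `llama-cpp-python`, assuming one of the GGUF files below has already been downloaded locally (the system prompt is illustrative):

```python
# A minimal sketch, assuming llama-cpp-python and a locally downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="Quintellect-10.7B-Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful coding assistant."  # illustrative
user_prompt = "Write a Python function that reverses a string."
prompt = f"{system_prompt}\nHuman: {user_prompt}\nAssistant: "

out = llm(prompt, max_tokens=256, stop=["Human:"])
print(out["choices"][0]["text"])
```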
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Quintellect-10.7B-Q8_0.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q8_0.gguf) | Q8_0 | 11.40GB | Extremely high quality, generally unneeded but max available quant. |
| [Quintellect-10.7B-Q6_K.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q6_K.gguf) | Q6_K | 8.80GB | Very high quality, near perfect, *recommended*. |
| [Quintellect-10.7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q5_K_M.gguf) | Q5_K_M | 7.59GB | High quality, *recommended*. |
| [Quintellect-10.7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q5_K_S.gguf) | Q5_K_S | 7.39GB | High quality, *recommended*. |
| [Quintellect-10.7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q4_K_M.gguf) | Q4_K_M | 6.46GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Quintellect-10.7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q4_K_S.gguf) | Q4_K_S | 6.11GB | Slightly lower quality with more space savings, *recommended*. |
| [Quintellect-10.7B-IQ4_NL.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ4_NL.gguf) | IQ4_NL | 6.09GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Quintellect-10.7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ4_XS.gguf) | IQ4_XS | 5.77GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Quintellect-10.7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q3_K_L.gguf) | Q3_K_L | 5.65GB | Lower quality but usable, good for low RAM availability. |
| [Quintellect-10.7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q3_K_M.gguf) | Q3_K_M | 5.19GB | Even lower quality. |
| [Quintellect-10.7B-IQ3_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ3_M.gguf) | IQ3_M | 4.84GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Quintellect-10.7B-IQ3_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ3_S.gguf) | IQ3_S | 4.69GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Quintellect-10.7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q3_K_S.gguf) | Q3_K_S | 4.66GB | Low quality, not recommended. |
| [Quintellect-10.7B-IQ3_XS.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ3_XS.gguf) | IQ3_XS | 4.44GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Quintellect-10.7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ3_XXS.gguf) | IQ3_XXS | 4.16GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Quintellect-10.7B-Q2_K.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-Q2_K.gguf) | Q2_K | 4.00GB | Very low quality but surprisingly usable. |
| [Quintellect-10.7B-IQ2_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ2_M.gguf) | IQ2_M | 3.67GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Quintellect-10.7B-IQ2_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ2_S.gguf) | IQ2_S | 3.39GB | Very low quality, uses SOTA techniques to be usable. |
| [Quintellect-10.7B-IQ2_XS.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ2_XS.gguf) | IQ2_XS | 3.23GB | Very low quality, uses SOTA techniques to be usable. |
| [Quintellect-10.7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ2_XXS.gguf) | IQ2_XXS | 2.92GB | Lower quality, uses SOTA techniques to be usable. |
| [Quintellect-10.7B-IQ1_M.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ1_M.gguf) | IQ1_M | 2.56GB | Extremely low quality, *not* recommended. |
| [Quintellect-10.7B-IQ1_S.gguf](https://huggingface.co/bartowski/Quintellect-10.7B-GGUF/blob/main/Quintellect-10.7B-IQ1_S.gguf) | IQ1_S | 2.35GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Quintellect-10.7B-GGUF --include "Quintellect-10.7B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Quintellect-10.7B-GGUF --include "Quintellect-10.7B-Q8_0.gguf/*" --local-dir Quintellect-10.7B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Quintellect-10.7B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
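As a rough illustration of that rule of thumb, the sketch below picks the largest quant from the table above that fits a hypothetical 8 GB GPU with some headroom (both numbers are assumptions, not measurements):

```python
# A rough sizing sketch; file sizes are taken from the table above, while the
# 8 GB VRAM figure and 1.5 GB headroom are illustrative assumptions.
quant_sizes_gb = {
    "Q8_0": 11.40, "Q6_K": 8.80, "Q5_K_M": 7.59, "Q4_K_M": 6.46,
    "Q3_K_M": 5.19, "Q2_K": 4.00,
}
vram_gb, headroom_gb = 8.0, 1.5

fits = {q: s for q, s in quant_sizes_gb.items() if s <= vram_gb - headroom_gb}
print(max(fits, key=fits.get) if fits else "offload layers or fall back to system RAM")
```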
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available for AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
liuch37/controlnet-sd-2-1-base-v1 | liuch37 | 2024-06-27T23:08:50Z | 551 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-26T21:05:48Z | ---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-liuch37/controlnet-sd-2-1-base-v1
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
## Intended uses & limitations
#### How to use
```python
import torch
from PIL import Image
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "liuch37/controlnet-sd-2-1-base-v1"
prompt = "YOUR_FAVORITE_PROMPT"
control_image = Image.open("YOUR_SEMANTIC_IMAGE")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float32)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float32
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save("YOUR_OUTPUT_IMAGE")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The ControlNet is trained with semantic maps as the condition, using the Cityscapes training set (https://huggingface.co/datasets/liuch37/controlnet-cityscapes). Only 2 epochs have been trained for the current version. |
microsoft/DialogRPT-human-vs-machine | microsoft | 2021-05-23T09:16:47Z | 550 | 5 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-classification",
"arxiv:2009.06978",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # Demo
Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
| Context | Response | `human_vs_machine` score |
| :------ | :------- | :------------: |
| I love NLP! | I'm not sure if it's a good idea. | 0.000 |
| I love NLP! | Me too! | 0.605 |
The `human_vs_machine` score predicts how likely the response is from a human rather than a machine.
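A minimal sketch of reproducing such a score with `transformers`; the `<|endoftext|>` separator between context and response follows the DialogRPT repository's convention and should be treated as an assumption here:

```python
# A minimal sketch: score how "human-like" a response is for a given context.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-human-vs-machine")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-human-vs-machine")
model.eval()

def score(context: str, response: str) -> float:
    # Context and response are joined with "<|endoftext|>" (assumed separator).
    input_ids = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Me too!"))  # should land near the 0.605 shown above
```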
# DialogRPT-human-vs-machine
### Dialog Ranking Pretrained Transformers
> How likely is a dialog response to be upvoted 👍 and/or to get replies 💬?
This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict.
It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/), trained on 100+ million human feedback data points.
It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates.
Quick Links:
* [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/)
* [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT)
* [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing)
We considered the following tasks and provided corresponding pretrained models.
|Task | Description | Pretrained model |
| :------------- | :----------- | :-----------: |
| **Human feedback** | **given a context and its two human responses, predict...**|
| `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) |
| `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) |
| `depth`| ... which gets longer follow-up thread? | [model card](https://huggingface.co/microsoft/DialogRPT-depth) |
| **Human-like** (human vs fake) | **given a context and one human response, distinguish it with...** |
| `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) |
| `human_vs_machine`| ... a machine generated response | this model |
### Contact:
Please create an issue on [our repo](https://github.com/golsun/DialogRPT)
### Citation:
```
@inproceedings{gao2020dialogrpt,
title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
year={2020},
booktitle={EMNLP}
}
```
|
pritamdeka/S-Bluebert-snli-multinli-stsb | pritamdeka | 2022-09-21T18:58:03Z | 550 | 6 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/S-Bluebert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-Bluebert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2021unsupervised,
title={Unsupervised Keyword Combination Query Generation from Online Health Related Content for Evidence-Based Fact Checking},
author={Deka, Pritam and Jurek-Loughrey, Anna},
booktitle={The 23rd International Conference on Information Integration and Web Intelligence},
pages={267--277},
year={2021}
}
``` |
jjzha/jobspanbert-base-cased | jjzha | 2023-03-16T07:04:47Z | 550 | 10 | transformers | [
"transformers",
"pytorch",
"bert",
"continuous pretraining",
"job postings",
"JobSpanBERT",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2022-04-12T11:39:56Z | ---
language:
- en
tags:
- continuous pretraining
- job postings
- JobSpanBERT
---
# JobSpanBERT
This is the JobSpanBERT model from:
Mike Zhang, Kristian Nørgaard Jensen, Sif Dam Sonniks, and Barbara Plank. __SkillSpan: Hard and Soft Skill Extraction from Job Postings__. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
This model is continuously pre-trained from a spanbert-base-cased checkpoint (which can also be found in our repository) on ~3.2M sentences from job postings. More information can be found in the paper.
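A minimal sketch of loading the checkpoint for feature extraction; for SkillSpan-style skill extraction you would add a token-classification head on top and fine-tune:

```python
# A minimal sketch: extract contextual embeddings from a job-posting sentence.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jjzha/jobspanbert-base-cased")
model = AutoModel.from_pretrained("jjzha/jobspanbert-base-cased")

inputs = tokenizer("Experience with Python and SQL is required.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```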
If you use this model, please cite the following paper:
```
@inproceedings{zhang-etal-2022-skillspan,
title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
author = "Zhang, Mike and
Jensen, Kristian N{\o}rgaard and
Sonniks, Sif and
Plank, Barbara",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.366",
pages = "4962--4984",
abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.",
}
``` |
cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all | cardiffnlp | 2022-09-30T00:31:18Z | 550 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-29T17:01:29Z | ---
datasets:
- cardiffnlp/tweet_topic_multi
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_multi
type: cardiffnlp/tweet_topic_multi
args: cardiffnlp/tweet_topic_multi
split: test_2021
metrics:
- name: F1
type: f1
value: 0.7647668393782383
- name: F1 (macro)
type: f1_macro
value: 0.6187022581213811
- name: Accuracy
type: accuracy
value: 0.5485407980941036
pipeline_tag: text-classification
widget:
- text: "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
example_title: "Example 1"
- text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
example_title: "Example 2"
---
# cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) dataset. It is fine-tuned on the `train_all` split and validated on the `test_2021` split of tweet_topic.
Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
- F1 (micro): 0.7647668393782383
- F1 (macro): 0.6187022581213811
- Accuracy: 0.5485407980941036
### Usage
```python
import math
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def sigmoid(x):
return 1 / (1 + math.exp(-x))
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all")
model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all", problem_type="multi_label_classification")
model.eval()
class_mapping = model.config.id2label
with torch.no_grad():
text = "#NewVideo Cray Dollas- Water- Ft. Charlie Rose- (Official Music Video)- {{URL}} via {@YouTube@} #watchandlearn {{USERNAME}}"  # example tweet
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
topic = [class_mapping[n] for n, i in enumerate(flags) if i]
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
timm/vit_small_patch32_224.augreg_in21k | timm | 2023-05-06T00:29:21Z | 550 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-22T07:55:11Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---
# Model card for vit_small_patch32_224.augreg_in21k
A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 30.9
- GMACs: 1.1
- Activations (M): 2.1
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_small_patch32_224.augreg_in21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_small_patch32_224.augreg_in21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
keremberke/yolov5s-forklift | keremberke | 2023-01-01T20:54:02Z | 550 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/forklift-object-detection",
"model-index",
"region:us"
]
| object-detection | 2023-01-01T11:18:11Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/forklift-object-detection
model-index:
- name: keremberke/yolov5s-forklift
results:
- task:
type: object-detection
dataset:
type: keremberke/forklift-object-detection
name: keremberke/forklift-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.8382598267226307 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-forklift" src="https://huggingface.co/keremberke/yolov5s-forklift/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-forklift')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-forklift --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
UBC-NLP/AraT5v2-base-1024 | UBC-NLP | 2024-05-16T17:42:03Z | 550 | 19 | transformers | [
"transformers",
"pytorch",
"t5",
"Arabic T5",
"MSA",
"Twitter",
"Arabic Dialect",
"Arabic Machine Translation",
"Arabic Text Summarization",
"Arabic News Title and Question Generation",
"Arabic Paraphrasing and Transliteration",
"Arabic Code-Switched Translation",
"ar",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2023-08-15T22:35:15Z | ---
language:
- ar
tags:
- Arabic T5
- MSA
- Twitter
- Arabic Dialect
- Arabic Machine Translation
- Arabic Text Summarization
- Arabic News Title and Question Generation
- Arabic Paraphrasing and Transliteration
- Arabic Code-Switched Translation
---
# AraT5v2-base-1024
## What's new?
- **More Data.** `AraT5v2-base-1024` is trained on large and more diverse Arabic data.
- **Larger Sequence Length.** We increase the sequence length from 512 to 1024 in this version.
- **Faster Convergence.** During fine-tuning, AraT5v2-base-1024 converges ~10x faster than the previous version (AraT5-base).
- **Extra IDs.** AraT5v2-base-1024 supports 100 sentinel tokens (a.k.a unique mask tokens).
<span style="color:red"><b>We recommend using AraT5v2-base-1024 instead of the previous version (AraT5-base).</b></span>
## An example of predicted masked token
```python
from transformers import T5Tokenizer, AutoModelForSeq2SeqLM
tokenizer = T5Tokenizer.from_pretrained("UBC-NLP/AraT5v2-base-1024")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5v2-base-1024")
ar_prompt="عاصمة ألمانيا هي <extra_id_0> "
input_ids = tokenizer(ar_prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print("Tokenized input:", tokenizer.tokenize(ar_prompt))
print("Decoded output:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Output:
```bash
Tokenized input: ['▁عاصمة', '▁ألمانيا', '▁هي', '<extra_id_0>']
Decoded output: برلين
```
# Citation
If you use our models for your scientific publication, or if you find the resources in this repository useful, please cite our papers as follows:
**(AraT5-base, AraT5-msa-base, AraT5-tweet-base, AraT5-msa-small, or AraT5-tweet-small)**
```bibtex
@inproceedings{nagoudi-etal-2022-arat5,
title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation",
author = "Nagoudi, El Moatez Billah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.47",
pages = "628--647",
abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.",
}
```
**AraT5v2-base-1024**
``` bibtex
@inproceedings{elmadany-etal-2023-octopus,
title = "Octopus: A Multitask Model and Toolkit for {A}rabic Natural Language Generation",
author = "Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of ArabicNLP 2023",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.arabicnlp-1.20",
doi = "10.18653/v1/2023.arabicnlp-1.20",
pages = "232--243",
}
``` |
TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF | TheBloke | 2023-09-27T12:46:52Z | 550 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:jondurbin/airoboros-l2-70b-2.1-creative",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-08-31T12:37:07Z | ---
license: llama2
model_name: Airoboros L2 70B 2.1 Creative
base_model: jondurbin/airoboros-l2-70b-2.1-creative
inference: false
model_creator: Jon Durbin
model_type: llama
prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros L2 70B 2.1 Creative - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros L2 70B 2.1 Creative](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1-creative)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 70B 2.1 Creative](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1-creative).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-70b-2.1-creative)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-l2-70b-2.1-creative.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-l2-70b-2.1-creative.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [airoboros-l2-70b-2.1-creative.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [airoboros-l2-70b-2.1-creative.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [airoboros-l2-70b-2.1-creative.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-l2-70b-2.1-creative.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [airoboros-l2-70b-2.1-creative.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [airoboros-l2-70b-2.1-creative.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-l2-70b-2.1-creative.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [airoboros-l2-70b-2.1-creative.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF/blob/main/airoboros-l2-70b-2.1-creative.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| airoboros-l2-70b-2.1-creative.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| airoboros-l2-70b-2.1-creative.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-a`
* `airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-b`
### q8_0
Please download:
* `airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-a`
* `airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-* > airoboros-l2-70b-2.1-creative.Q6_K.gguf && rm airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-*
cat airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-* > airoboros-l2-70b-2.1-creative.Q8_0.gguf && rm airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-a + airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-b airoboros-l2-70b-2.1-creative.Q6_K.gguf
del airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-a airoboros-l2-70b-2.1-creative.Q6_K.gguf-split-b
COPY /B airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-a + airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-b airoboros-l2-70b-2.1-creative.Q8_0.gguf
del airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-a airoboros-l2-70b-2.1-creative.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF and below it, a specific filename to download, such as: airoboros-l2-70b-2.1-creative.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF airoboros-l2-70b-2.1-creative.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF airoboros-l2-70b-2.1-creative.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m airoboros-l2-70b-2.1-creative.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-L2-70B-2.1-Creative-GGUF", model_file="airoboros-l2-70b-2.1-creative.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 70B 2.1 Creative
This is a merge of llama-2-70b with the "creative" adapter from https://hf.co/jondurbin/airoboros-lmoe-70b-2.1
Basically, it uses a subset of the airoboros 2.1 training data that is specifically focused on creative tasks, such as writing and roleplay.
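For readers who want to reproduce this kind of merge themselves, the sketch below shows the general PEFT pattern for baking a LoRA adapter into a base model. It is an illustrative sketch only, assuming the "creative" adapter is a standard PEFT LoRA; the adapter path and output directory are placeholders, not the layout of the linked repo, and this is not the author's actual merge script.
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
# Load the base model, apply the adapter, then bake the adapter weights into the base.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "path/to/creative-adapter")  # placeholder adapter path
model = model.merge_and_unload()
model.save_pretrained("airoboros-l2-70b-2.1-creative-merged")  # placeholder output directory
```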
<!-- original-model-card end -->
|
TheBloke/Chronohermes-Grad-L2-13B-GGUF | TheBloke | 2023-09-27T12:47:21Z | 550 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"en",
"base_model:Doctor-Shotgun/Chronohermes-Grad-L2-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-05T04:14:50Z | ---
language:
- en
license: other
library_name: transformers
tags:
- llama
- llama-2
model_name: Chronohermes Grad L2 13B
base_model: Doctor-Shotgun/Chronohermes-Grad-L2-13b
inference: false
model_creator: Doctor Shotgun
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Chronohermes Grad L2 13B - GGUF
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Chronohermes Grad L2 13B](https://huggingface.co/Doctor-Shotgun/Chronohermes-Grad-L2-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Doctor Shotgun's Chronohermes Grad L2 13B](https://huggingface.co/Doctor-Shotgun/Chronohermes-Grad-L2-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Chronohermes-Grad-L2-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
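If you build prompts programmatically, a small helper like this keeps the template in one place (an illustrative sketch, not something shipped with this repo):
```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)
def format_prompt(instruction: str) -> str:
    """Wrap a plain instruction in the Alpaca template shown above."""
    return ALPACA_TEMPLATE.format(prompt=instruction)
print(format_prompt("List three facts about the Moon."))
```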
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Doctor Shotgun's Chronohermes Grad L2 13B](https://huggingface.co/Doctor-Shotgun/Chronohermes-Grad-L2-13b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
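As a worked check of the Q4_K figure above (a sketch that additionally assumes, as in llama.cpp's k-quant layout, an fp16 scale and fp16 min per super-block):
```python
weights = 8 * 32                  # 8 blocks of 32 weights per super-block
weight_bits = weights * 4         # 4-bit quantised weights
scale_min_bits = 8 * 2 * 6        # per-block scale and min, 6 bits each
superblock_bits = 2 * 16          # assumed fp16 super-block scale and min
print((weight_bits + scale_min_bits + superblock_bits) / weights)  # -> 4.5 bpw
```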
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [chronohermes-grad-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [chronohermes-grad-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [chronohermes-grad-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [chronohermes-grad-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [chronohermes-grad-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [chronohermes-grad-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [chronohermes-grad-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [chronohermes-grad-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [chronohermes-grad-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [chronohermes-grad-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [chronohermes-grad-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [chronohermes-grad-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGUF/blob/main/chronohermes-grad-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Chronohermes-Grad-L2-13B-GGUF and below it, a specific filename to download, such as: chronohermes-grad-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Chronohermes-Grad-L2-13B-GGUF chronohermes-grad-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Chronohermes-Grad-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chronohermes-Grad-L2-13B-GGUF chronohermes-grad-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m chronohermes-grad-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronohermes-Grad-L2-13B-GGUF", model_file="chronohermes-grad-l2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
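llama-cpp-python, mentioned above, is an alternative route. A minimal sketch, assuming a recent `llama-cpp-python` build with GGUF support (`pip install llama-cpp-python`) and the model file downloaded locally:
```python
from llama_cpp import Llama
# n_gpu_layers controls GPU offload; set it to 0 for CPU-only inference.
llm = Llama(model_path="./chronohermes-grad-l2-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
output = llm(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about autumn rain.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```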
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
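As a minimal LangChain sketch using the llama-cpp-python route (parameter values are illustrative, and the prompt mirrors the Alpaca template documented above):
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# Wrap a locally downloaded GGUF file as a LangChain LLM.
llm = LlamaCpp(model_path="./chronohermes-grad-l2-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32, temperature=0.7)
prompt = PromptTemplate.from_template(
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(instruction="Name three uses for a paperclip."))
```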
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Doctor Shotgun's Chronohermes Grad L2 13B
# Model Card: Chronohermes-Grad-L2-13b
This is a Llama 2-based model consisting of a gradient merge between:
- [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2)
- [Nous Hermes Llama2 13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
Quantized Models Provided by TheBloke (Thanks!):
- [GGML](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/Chronohermes-Grad-L2-13B-GPTQ)
The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe
The intended objective was to combine NH2's superior instruction following capabilities with the creativity and response length of Chronos v2. Merge ratios used are identical to those used in [Chronoboros Grad](https://huggingface.co/kingbri/chronoboros-grad-l2-13B), with NH2 starting with a weight of 0.9 at the 1st layer and phasing out by the 25th layer. The method is illustrated in the image below, with green representing NH2 and blue representing Chronos v2:

## Usage:
Intended to be prompted with the Alpaca instruction format of the base models:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
The model will show biases similar to those exhibited by the base models. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the base models for details.
<!-- original-model-card end -->
|
TheBloke/Stable-Platypus2-13B-GGUF | TheBloke | 2023-09-27T12:47:22Z | 550 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2308.07317",
"arxiv:2307.09288",
"base_model:garage-bAInd/Stable-Platypus2-13B",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-05T05:04:25Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- garage-bAInd/Open-Platypus
model_name: Stable-Platypus2 13B
base_model: garage-bAInd/Stable-Platypus2-13B
inference: false
model_creator: garage-bAInd
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Stable-Platypus2 13B - GGUF
- Model creator: [garage-bAInd](https://huggingface.co/garage-bAInd)
- Original model: [Stable-Platypus2 13B](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [garage-bAInd's Stable-Platypus2 13B](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Stable-Platypus2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF)
* [garage-bAInd's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-sa-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [garage-bAInd's Stable-Platypus2 13B](https://huggingface.co/garage-bAInd/Stable-Platypus2-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stable-platypus2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [stable-platypus2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [stable-platypus2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [stable-platypus2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [stable-platypus2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stable-platypus2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [stable-platypus2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [stable-platypus2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stable-platypus2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [stable-platypus2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [stable-platypus2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [stable-platypus2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGUF/blob/main/stable-platypus2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Stable-Platypus2-13B-GGUF and below it, a specific filename to download, such as: stable-platypus2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Stable-Platypus2-13B-GGUF stable-platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Stable-Platypus2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Stable-Platypus2-13B-GGUF stable-platypus2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m stable-platypus2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Stable-Platypus2-13B-GGUF", model_file="stable-platypus2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
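ctransformers can also stream tokens as they are generated; here is a small variant of the example above, under the same install assumptions:
```python
from ctransformers import AutoModelForCausalLM
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Stable-Platypus2-13B-GGUF",
    model_file="stable-platypus2-13b.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)
# stream=True yields text fragments as they are produced instead of one final string.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
```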
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: garage-bAInd's Stable-Platypus2 13B
# Stable-Platypus2-13B
Stable-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`stabilityai/StableBeluga-13B`](https://huggingface.co/stabilityai/StableBeluga-13B).

### Benchmark Metrics
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 58.30 |
| ARC (25-shot) | 62.71 |
| HellaSwag (10-shot) | 82.29 |
| TruthfulQA (0-shot) | 52.52 |
| Avg. | 63.96 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
### Model Details
* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **StableBeluga-13B** trained by StabilityAI
* **Model type:** **Stable-Platypus2-13B** is an auto-regressive language model based on the LLaMA 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for StableBeluga-13B base weights**: See Notice.txt
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
### Training Dataset
`garage-bAInd/Platypus2-70B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
### Training Procedure
`garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
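For orientation, the sketch below shows roughly what a LoRA setup with PEFT looks like. It is an illustrative sketch only; the rank, target modules, and other hyperparameters are assumptions, and the actual training code is in the Platypus repo linked above.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", device_map="auto")
lora_cfg = LoraConfig(
    r=16,                      # assumed rank, not the authors' value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```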
### Reproducing Evaluation Results
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Stable-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Stable-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Stable-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Stable-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Stable-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Stable-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Stable-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Stable-Platypus2-13B/truthfulqa_0shot.json --device cuda
```
### Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}
```
<!-- original-model-card end -->
|
fgewfskjfsd/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GGUFv2 | fgewfskjfsd | 2023-10-13T14:42:31Z | 550 | 7 | null | [
"gguf",
"license:unknown",
"region:us"
]
| null | 2023-10-13T13:43:07Z | ---
license: unknown
---
Requantization of TheBloke's fp16 version of Wizard-Vicuna 13B Uncensored (with SuperHOT 8K merged) to GGUFv2.
Original link: https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16 |
TheBloke/Mistral-7B-SciPhi-32k-GGUF | TheBloke | 2023-10-22T18:02:24Z | 550 | 12 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:emrgnt-cmplxty/Mistral-7B-SciPhi-32k",
"license:llama2",
"text-generation-inference",
"region:us"
]
| null | 2023-10-22T17:56:41Z | ---
base_model: emrgnt-cmplxty/Mistral-7B-SciPhi-32k
inference: false
license: llama2
model_creator: Owen Colegrove
model_name: Mistral 7B SciPhi 32K
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B SciPhi 32K - GGUF
- Model creator: [Owen Colegrove](https://huggingface.co/emrgnt-cmplxty)
- Original model: [Mistral 7B SciPhi 32K](https://huggingface.co/emrgnt-cmplxty/Mistral-7B-SciPhi-32k)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Owen Colegrove's Mistral 7B SciPhi 32K](https://huggingface.co/emrgnt-cmplxty/Mistral-7B-SciPhi-32k).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF)
* [Owen Colegrove's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/emrgnt-cmplxty/Mistral-7B-SciPhi-32k)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-sciphi-32k.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-sciphi-32k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [mistral-7b-sciphi-32k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-7b-sciphi-32k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-7b-sciphi-32k.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-sciphi-32k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-7b-sciphi-32k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-sciphi-32k.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-sciphi-32k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-7b-sciphi-32k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-sciphi-32k.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-sciphi-32k.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-SciPhi-32k-GGUF/blob/main/mistral-7b-sciphi-32k.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-SciPhi-32k-GGUF and below it, a specific filename to download, such as: mistral-7b-sciphi-32k.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mistral-7B-SciPhi-32k-GGUF mistral-7b-sciphi-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mistral-7B-SciPhi-32k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-SciPhi-32k-GGUF mistral-7b-sciphi-32k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mistral-7b-sciphi-32k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-SciPhi-32k-GGUF", model_file="mistral-7b-sciphi-32k.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
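Alternatively, here is a minimal llama-cpp-python sketch for the same GGUF file (an illustrative example, not part of the original instructions; it assumes `pip install llama-cpp-python` and that the file has already been downloaded):
```python
from llama_cpp import Llama

# Load the downloaded GGUF file. Set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./mistral-7b-sciphi-32k.Q4_K_M.gguf",
    n_ctx=2048,       # increase towards 32768 if you have enough RAM/VRAM
    n_gpu_layers=32,
)

print(llm("AI is going to", max_tokens=128)["choices"][0]["text"])
```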
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Owen Colegrove's Mistral 7B SciPhi 32K
Training is currently still underway, but this is the first epoch of a 32k context fine-tuning run of Mistral-7b over the following datasets:
- emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
- open-phi/rag-textbook-instruct-full
- open-phi/programming_books_llama
- open-phi/textbooks
- Open-Orca/SlimOrca
- WizardLM/WizardLM_evol_instruct_70k
<!-- original-model-card end -->
|
TheBloke/mistral-7B-dpo-v5-GGUF | TheBloke | 2023-12-15T13:34:40Z | 550 | 3 | transformers | [
"transformers",
"gguf",
"mistral",
"en",
"base_model:mncai/mistral-7b-dpo-v5",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-15T13:30:34Z | ---
base_model: mncai/mistral-7b-dpo-v5
inference: false
language:
- en
license: apache-2.0
model_creator: MindsAndCompany
model_name: Mistral 7B DPO V5
model_type: mistral
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B DPO V5 - GGUF
- Model creator: [MindsAndCompany](https://huggingface.co/mncai)
- Original model: [Mistral 7B DPO V5](https://huggingface.co/mncai/mistral-7b-dpo-v5)
<!-- description start -->
## Description
This repo contains GGUF format model files for [MindsAndCompany's Mistral 7B DPO V5](https://huggingface.co/mncai/mistral-7b-dpo-v5).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF)
* [MindsAndCompany's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mncai/mistral-7b-dpo-v5)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
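As a rough back-of-the-envelope check of the bpw figures above (our own arithmetic, assuming one fp16 scale per super-block, plus one fp16 min for the "type-1" formats):
```python
# Illustrative arithmetic only - not llama.cpp code.
def bpw(weight_bits, weights_per_block, blocks, scale_bits, has_min, header_bits):
    n = weights_per_block * blocks                         # weights per super-block
    block_meta = blocks * scale_bits * (2 if has_min else 1)
    return (weight_bits * n + block_meta + header_bits) / n

print(bpw(4, 32, 8, 6, True, 32))    # Q4_K -> 4.5 bpw
print(bpw(6, 16, 16, 8, False, 16))  # Q6_K -> 6.5625 bpw
```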
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-dpo-v5.Q2_K.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-dpo-v5.Q3_K_S.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [mistral-7b-dpo-v5.Q3_K_M.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-7b-dpo-v5.Q3_K_L.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-7b-dpo-v5.Q4_0.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-dpo-v5.Q4_K_S.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-7b-dpo-v5.Q4_K_M.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-dpo-v5.Q5_0.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-dpo-v5.Q5_K_S.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-7b-dpo-v5.Q5_K_M.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-dpo-v5.Q6_K.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-dpo-v5.Q8_0.gguf](https://huggingface.co/TheBloke/mistral-7B-dpo-v5-GGUF/blob/main/mistral-7b-dpo-v5.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/mistral-7B-dpo-v5-GGUF and below it, a specific filename to download, such as: mistral-7b-dpo-v5.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/mistral-7B-dpo-v5-GGUF mistral-7b-dpo-v5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/mistral-7B-dpo-v5-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/mistral-7B-dpo-v5-GGUF mistral-7b-dpo-v5.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mistral-7b-dpo-v5.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mistral-7b-dpo-v5.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|user|>\n{prompt}\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mistral-7b-dpo-v5.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: MindsAndCompany's Mistral 7B DPO V5
# Model Card for mncai/mistral-7b-dpo-v5
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as Code Assistant, TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on Mistral, instruction-tuned and further aligned with DPO.
### How to Use
Here are some examples of how to use our model.
```python
from transformers import AutoTokenizer
import transformers
import torch

hf_model = 'mncai/mistral-7b-dpo-v5'

# Load the tokenizer and build a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`; remove on CPU-only setups
)

# Korean prompt: "There are two spheres with diameters 1 and 2 - how do their volumes compare? Please explain."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 각 구의 부피는 몇배야? 설명도 같이 해줘.\n<|assistant|>\n"
sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected]
<!-- original-model-card end -->
|
TheBloke/docsgpt-7B-mistral-GGUF | TheBloke | 2023-12-29T15:05:44Z | 550 | 9 | transformers | [
"transformers",
"gguf",
"mistral",
"rag",
"closed-qa",
"context",
"base_model:Arc53/docsgpt-7b-mistral",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-29T14:41:30Z | ---
base_model: Arc53/docsgpt-7b-mistral
inference: false
license: apache-2.0
model_creator: Arc53
model_name: DocsGPT 7B Mistral
model_type: mistral
prompt_template: '### Instruction
{prompt}
### Context
{{context}}
### Answer
'
quantized_by: TheBloke
tags:
- rag
- closed-qa
- context
- mistral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DocsGPT 7B Mistral - GGUF
- Model creator: [Arc53](https://huggingface.co/Arc53)
- Original model: [DocsGPT 7B Mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Arc53's DocsGPT 7B Mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/docsgpt-7B-mistral-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF)
* [Arc53's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Arc53/docsgpt-7b-mistral)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: DocsGPT
```
### Instruction
{prompt}
### Context
{{context}}
### Answer
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [docsgpt-7b-mistral.Q2_K.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [docsgpt-7b-mistral.Q3_K_S.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [docsgpt-7b-mistral.Q3_K_M.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [docsgpt-7b-mistral.Q3_K_L.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [docsgpt-7b-mistral.Q4_0.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [docsgpt-7b-mistral.Q4_K_S.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [docsgpt-7b-mistral.Q4_K_M.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [docsgpt-7b-mistral.Q5_0.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [docsgpt-7b-mistral.Q5_K_S.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [docsgpt-7b-mistral.Q5_K_M.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [docsgpt-7b-mistral.Q6_K.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [docsgpt-7b-mistral.Q8_0.gguf](https://huggingface.co/TheBloke/docsgpt-7B-mistral-GGUF/blob/main/docsgpt-7b-mistral.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/docsgpt-7B-mistral-GGUF and below it, a specific filename to download, such as: docsgpt-7b-mistral.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/docsgpt-7B-mistral-GGUF docsgpt-7b-mistral.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/docsgpt-7B-mistral-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/docsgpt-7B-mistral-GGUF docsgpt-7b-mistral.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m docsgpt-7b-mistral.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction\n{prompt}\n### Context\n{{context}}\n### Answer"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./docsgpt-7b-mistral.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### Instruction\n{prompt}\n### Context\n{{context}}\n### Answer", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./docsgpt-7b-mistral.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
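For example, a minimal LangChain sketch using llama-cpp-python might look like this (illustrative only; the import path and `invoke` call assume a recent langchain-community release - check the guides above for your version):
```python
from langchain_community.llms import LlamaCpp  # older versions: from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./docsgpt-7b-mistral.Q4_K_M.gguf",
    n_ctx=32768,
    n_gpu_layers=35,
)

prompt = "### Instruction\nWhat does this project do?\n### Context\n(retrieved documentation goes here)\n### Answer"
print(llm.invoke(prompt))  # older versions: llm(prompt)
```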
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Arc53's DocsGPT 7B Mistral
DocsGPT is optimized for documentation (RAG-optimised): it is specifically fine-tuned for providing answers that are based on context, making it particularly useful for developers and technical support teams.
We used the LoRA fine-tuning process.
This model is fine-tuned on top of zephyr-7b-beta.
It's released under the Apache-2.0 license, so you can use it for commercial purposes too.
Benchmarks:
Bacon:
The BACON test is an internal assessment designed to evaluate the capabilities of neural networks in handling questions with substantial content. It focuses on testing the model's understanding of context-driven queries, as well as its tendency for hallucination and attention span. The questions in both parts are carefully crafted, drawing from diverse sources such as scientific papers, complex code problems, and instructional prompts, providing a comprehensive test of the model's ability to process and generate information in various domains.
| Model | Score |
|------------------------------|-------|
| gpt-4 | 8.74 |
| DocsGPT-7b-Mistral | 8.64 |
| gpt-3.5-turbo | 8.42 |
| zephyr-7b-beta | 8.37 |
| neural-chat-7b-v3-1 | 7.88 |
| Mistral-7B-Instruct-v0.1 | 7.44 |
| openinstruct-mistral-7b | 5.86 |
| llama-2-13b | 2.29 |


MTbench with llm judge:

########## First turn ##########
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 1 | 8.956250 |
| gpt-3.5-turbo | 1 | 8.075000 |
| DocsGPT-7b-Mistral | 1 | 7.593750 |
| zephyr-7b-beta | 1 | 7.412500 |
| vicuna-13b-v1.3 | 1 | 6.812500 |
| alpaca-13b | 1 | 4.975000 |
| deepseek-coder-6.7b | 1 | 4.506329 |
########## Second turn ##########
| Model | Turn | Score |
|-----------------------|------|----------|
| gpt-4 | 2 | 9.025000 |
| gpt-3.5-turbo | 2 | 7.812500 |
| DocsGPT-7b-Mistral | 2 | 6.740000 |
| zephyr-7b-beta | 2 | 6.650000 |
| vicuna-13b-v1.3 | 2 | 5.962500 |
| deepseek-coder-6.7b | 2 | 5.025641 |
| alpaca-13b | 2 | 4.087500 |
########## Average ##########
| Model | Score |
|-----------------------|----------|
| gpt-4 | 8.990625 |
| gpt-3.5-turbo | 7.943750 |
| DocsGPT-7b-Mistral | 7.166875 |
| zephyr-7b-beta | 7.031250 |
| vicuna-13b-v1.3 | 6.387500 |
| deepseek-coder-6.7b | 4.764331 |
| alpaca-13b | 4.531250 |
To prepare your prompts make sure you keep this format:
```
### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
```
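For instance, a small illustrative Python helper for assembling that format (the function and variable names here are ours, not part of the model):
```python
def build_prompt(question: str, context: str) -> str:
    # Assemble the DocsGPT prompt format shown above.
    return (
        "### Instruction\n"
        f"{question}\n"
        "### Context\n"
        f"{context}\n"
        "### Answer"
    )

print(build_prompt("How do I configure the retriever?", "(your document retrieval + system instructions)"))
```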
<!-- original-model-card end -->
|
qwp4w3hyb/zephyr-orpo-141b-A35b-v0.1-iMat-GGUF | qwp4w3hyb | 2024-04-17T14:47:14Z | 550 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-16T08:33:26Z | ---
license: apache-2.0
---
|
MadMarx37/deepseek-coder-1.3b-python-peft | MadMarx37 | 2024-04-29T18:32:03Z | 550 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"en",
"dataset:ArtifactAI/arxiv_python_research_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-29T17:55:13Z | ---
library_name: transformers
tags:
- code
license: mit
datasets:
- ArtifactAI/arxiv_python_research_code
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A parameter-efficient finetune (using LoRA) of DeepSeek Coder 1.3B on Python code.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
A LoRA finetune of DeepSeek Coder 1.3B trained on 1,000 examples of Python code from the ArtifactAI/arxiv_python_research_code dataset.
- **Model type:** Text Generation
- **Language(s) (NLP):** English, Python
- **Finetuned from model:** deepseek-ai/deepseek-coder-1.3b-base
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/kevin-v96/python-codecomplete-lm
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To generate Python code
## How to Get Started with the Model
Use the code below to get started with the model.
```
from transformers import pipeline

model_name = "MadMarx37/deepseek-coder-1.3b-python-peft"
max_length = 128  # maximum length of the generated sequence; adjust as needed

def generate_output(prompt):
    # Run a text-generation pipeline with the fine-tuned model
    pipe = pipeline(task="text-generation", model=model_name, tokenizer=model_name, max_length=max_length)
    result = pipe(prompt)
    print(result[0]['generated_text'])

generate_output("def fibonacci(n):")  # example prompt
```
## Training Details
#### Training Hyperparameters
- Training regime: fp16 mixed-precision with original model loaded in 4bits with bitsandbytes <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- learning_rate = 2e-3
- lr_scheduler_type = 'cosine_with_restarts'
- max_grad_norm = 0.001
- weight_decay = 0.001
- num_train_epochs = 15
- eval_strategy = "steps"
- eval_steps = 25
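As an illustration, these settings roughly map onto `transformers.TrainingArguments` as follows (a sketch, not the exact training script; `output_dir` is a placeholder, and the eval argument is named `evaluation_strategy` in older library versions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deepseek-coder-1.3b-python-peft",  # placeholder output path
    learning_rate=2e-3,
    lr_scheduler_type="cosine_with_restarts",
    max_grad_norm=0.001,
    weight_decay=0.001,
    num_train_epochs=15,
    eval_strategy="steps",
    eval_steps=25,
    fp16=True,  # fp16 mixed precision, as noted above
)
```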
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
1.3B parameters. Training time of ~2 hours on an RTX3080.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Standard training and eval loss from the HF SFTTrainer.
### Results
Training Loss: 0.074100
Validation Loss: 0.022271
#### Summary
The training showed some instability in the gradient norms, but the overall trend in both training and validation loss
was downward, and the validation loss has almost plateaued, which is ideally where we want our model. Code generation on the same
prompts that we tested the original model on also looks better with the finetuned model. If we wanted to increase the finetuning data,
a good way to improve the model further would be to also increase the number of epochs.
The training run metrics can be seen here:
https://wandb.ai/kevinv3796/python-autocomplete-deepseek/reports/Supervised-Finetuning-run-for-DeepSeek-Coder-1-3B-on-Python-Code--Vmlldzo3NzQ4NjY0?accessToken=bo0rlzp0yj9vxf1xe3fybfv6rbgl97w5kkab478t8f5unbwltdczy63ba9o9kwjp
|
adamdad/videocrafterv2_diffusers | adamdad | 2024-06-24T09:12:02Z | 550 | 2 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2024-05-03T12:42:39Z | Entry not found |
RUNorm/RUNorm-normalizer-big | RUNorm | 2024-05-14T00:56:18Z | 550 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-05-05T10:16:44Z | ---
license: apache-2.0
language:
- ru
inference: false
---
Used in https://github.com/Den4ikAI/runorm
You can support the project with a donation. This will help us develop higher-quality new versions faster.
CloudTips: https://pay.cloudtips.ru/p/b9d86686 |
BAAI/Bunny-v1_1-Llama-3-8B-V | BAAI | 2024-06-24T14:32:30Z | 550 | 25 | transformers | [
"transformers",
"safetensors",
"bunny-llama",
"text-generation",
"conversational",
"custom_code",
"arxiv:2402.11530",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-05-31T10:25:35Z | ---
inference: false
license: apache-2.0
---
# Model Card
<p align="center">
<img src="./icon.png" alt="Logo" width="350">
</p>
📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [Demo](http://bunny.baai.ac.cn)
This is Bunny-v1.1-Llama-3-8B-V.
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data through curated selection from a broader data source.
We provide Bunny-v1.1-Llama-3-8B-V, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) with [S \\(^{2}\\)-Wrapper](https://github.com/bfshi/scaling_on_scales), supporting 1152x1152 resolution. More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny).

# Quickstart
Here we show a code snippet to show you how to use the model with transformers.
Before running the snippet, you need to install the following dependencies:
```shell
pip install torch transformers accelerate pillow
```
If there is enough CUDA memory, it will be faster to execute this snippet by setting `CUDA_VISIBLE_DEVICES=0`.
Users, especially those in mainland China, may want to refer to a HuggingFace [mirror site](https://hf-mirror.com).
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')
# set device
device = 'cuda' # or cpu
torch.set_default_device(device)
# create model
model = AutoModelForCausalLM.from_pretrained(
'BAAI/Bunny-v1_1-Llama-3-8B-V',
torch_dtype=torch.float16, # float32 for cpu
device_map='auto',
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
'BAAI/Bunny-v1_1-Llama-3-8B-V',
trust_remote_code=True)
# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][1:], dtype=torch.long).unsqueeze(0).to(device)
# image, sample images can be found in images folder
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)
# generate
output_ids = model.generate(
input_ids,
images=image_tensor,
max_new_tokens=100,
use_cache=True,
repetition_penalty=1.0 # increase this to avoid chattering
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
|
hyokwan/hkcode-solar-youtube-merged | hyokwan | 2024-06-26T10:18:58Z | 550 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"hkcode",
"hyokwan",
"llama2",
"solar",
"merge",
"merged",
"moe",
"conversational",
"ko",
"dataset:hyokwan/llama3data_hkcode",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-06T05:49:15Z | ---
license: mit
datasets:
- hyokwan/llama3data_hkcode
language:
- ko
library_name: transformers
pipeline_tag: text-generation
tags:
- hkcode
- hyokwan
- llama2
- solar
- merge
- merged
- moe
---
Model Details hkcode-solar-youtube-merged
The hkcode-solar-youtube-merged model is a continually pretrained language model based on SOLAR-10.7B.
This model is trained for a specific university department (Korea Polytechnics Fintech).
License: https://llama.meta.com/llama3/license
Intended Use TBD
How to use TBD
Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.
Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at https://llama.meta.com/llama3/use-policy/.
Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at http://llama.meta.com/responsible-use-guide
---
license: mit
datasets:
- hyokwan/llama3data_hkcode
language:
- ko
library_name: transformers
pipeline_tag: text-generation
tags:
- hkcode
- llama2
- solar
- hyokwan
--- |
kuleshov-group/mdlm-owt | kuleshov-group | 2024-06-18T19:24:00Z | 550 | 6 | transformers | [
"transformers",
"safetensors",
"mdlm",
"fill-mask",
"custom_code",
"en",
"dataset:Skylion007/openwebtext",
"arxiv:2406.07524",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| fill-mask | 2024-06-06T18:47:47Z | ---
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- Skylion007/openwebtext
metrics:
- perplexity
---
## Using MDLM
To use the pre-trained model for masked language modeling, use the following snippet:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
# See the `MDLM` collection page on the hub for list of available models.
tokenizer = AutoTokenizer.from_pretrained('gpt2')
model_name = 'kuleshov-group/mdlm-owt'
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
For more details, please see our github repository: [MDLM](https://github.com/kuleshov-group/mdlm)
## Model Details
The model, which has a context length of `1024` and is similar in size to GPT2-medium with approximately `130 million` non-embedding parameters,
was trained using a forward diffusion process that generates inputs varying from fully masked to fully unmasked. Its objective is to
reconstruct the original input from these varying levels of masking, outputting logits in the process.
The training regimen comprised one million steps on the OpenWebText corpus, involving the processing of a total of `33 billion` tokens.
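As a rough illustration of that forward process (a simplified stand-in, not the authors' implementation or the exact MDLM noise schedule), masking a random fraction of tokens at a sampled noise level can be sketched as:

```python
import torch

def forward_mask(input_ids: torch.Tensor, mask_id: int) -> torch.Tensor:
    """Illustrative forward masking: each token is masked independently with probability t."""
    t = torch.rand(())                                    # noise level in [0, 1): 0 ~ unmasked, near 1 ~ fully masked
    mask = torch.rand(input_ids.shape, device=input_ids.device) < t
    return torch.where(mask, torch.full_like(input_ids, mask_id), input_ids)

# Usage sketch (mask_id is whatever mask token id the MDLM setup adds to the GPT-2 vocabulary):
# noisy_ids = forward_mask(tokens["input_ids"], mask_id=MASK_ID)
```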
For more details, please see our paper: [Simple and Effective Masked Diffusion Language Models](http://arxiv.org/abs/2406.07524).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
Please cite our work using the bibtex below:
**BibTeX:**
```
@misc{sahoo2024simple,
title={Simple and Effective Masked Diffusion Language Models},
author={Subham Sekhar Sahoo and Marianne Arriola and Yair Schiff and Aaron Gokaslan and Edgar Marroquin and Justin T Chiu and Alexander Rush and Volodymyr Kuleshov},
year={2024},
eprint={2406.07524},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
```
@software{Sahoo_Simple_and_Effective_2024,
author = {Sahoo, Subham Sekhar and Arriola, Marianne and Schiff, Yair and Gokaslan, Aaron and Marroquin, Edgar and Chiu, Justin T and Rush, Alexander and Kuleshov, Volodymyr},
doi = {10.48550/arXiv.2406.07524},
month = jun,
title = {{Simple and Effective Masked Diffusion Language Models}},
version = {arXiv:2406.07524v1},
year = {2024}
}
```
## Model Card Contact
Subham Sekhar Sahoo ([email protected]) |
Replete-AI/Replete-Coder-Qwen2-1.5b | Replete-AI | 2024-06-24T23:09:49Z | 550 | 20 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"dataset:Replete-AI/code_bagel_hermes-2.5",
"dataset:Replete-AI/code_bagel",
"dataset:Replete-AI/OpenHermes-2.5-Uncensored",
"dataset:teknium/OpenHermes-2.5",
"dataset:layoric/tiny-codes-alpaca",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"dataset:TIGER-Lab/MathInstruct",
"dataset:chargoddard/commitpack-ft-instruct-rated",
"dataset:iamturun/code_instructions_120k_alpaca",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:coseal/CodeUltraFeedback_binarized",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:jondurbin/airoboros-2.2",
"dataset:camel-ai",
"dataset:lmsys/lmsys-chat-1m",
"dataset:CollectiveCognition/chats-data-2023-09-22",
"dataset:CoT-Alpaca-GPT4",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:GPTeacher",
"dataset:OpenGPT",
"dataset:meta-math/MetaMathQA",
"dataset:Open-Orca/SlimOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:Unnatural-Instructions-GPT4",
"base_model:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-23T03:30:08Z | ---
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
datasets:
- Replete-AI/code_bagel_hermes-2.5
- Replete-AI/code_bagel
- Replete-AI/OpenHermes-2.5-Uncensored
- teknium/OpenHermes-2.5
- layoric/tiny-codes-alpaca
- glaiveai/glaive-code-assistant-v3
- ajibawa-2023/Code-290k-ShareGPT
- TIGER-Lab/MathInstruct
- chargoddard/commitpack-ft-instruct-rated
- iamturun/code_instructions_120k_alpaca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- cognitivecomputations/dolphin-coder
- nickrosh/Evol-Instruct-Code-80k-v1
- coseal/CodeUltraFeedback_binarized
- glaiveai/glaive-function-calling-v2
- CyberNative/Code_Vulnerability_Security_DPO
- jondurbin/airoboros-2.2
- camel-ai
- lmsys/lmsys-chat-1m
- CollectiveCognition/chats-data-2023-09-22
- CoT-Alpaca-GPT4
- WizardLM/WizardLM_evol_instruct_70k
- WizardLM/WizardLM_evol_instruct_V2_196k
- teknium/GPT4-LLM-Cleaned
- GPTeacher
- OpenGPT
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
- garage-bAInd/Open-Platypus
- anon8231489123/ShareGPT_Vicuna_unfiltered
- Unnatural-Instructions-GPT4
model-index:
- name: Replete-Coder-llama3-8b
results:
- task:
name: HumanEval
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.35365853658536583
verified: True
- task:
name: AI2 Reasoning Challenge
type: text-generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: accuracy
value:
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: accuracy
value:
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: multiple_choice_accuracy
value:
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
name: Text Generation
type: text-generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value:
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
---
# Replete-Coder-Qwen2-1.5b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it's trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!

Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b.
You can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-Qwen2-1.5b is a general purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data used to train this model was 100% uncensored and fully deduplicated before training.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: The Replete-Coder series of models is fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.

_________________________________________________________________________________________________
You can find the 25% non-coding instruction data below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: ChatML
```
<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
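As a minimal sketch of filling in this template with `transformers` (standard causal-LM loading; the system prompt is the commonly used one quoted above, and the user prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Replete-AI/Replete-Coder-Qwen2-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

system = "Below is an instruction that describes a task, Write a response that appropriately completes the request."
user = "Write a Python function that reverses a string."

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```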
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their huggingface accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to Unsloth for being the main method of training for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1eXGqy5M--0yW4u0uRnmNgBka-tDk2Li0?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-AI Discord! We are a great and loving community!
- https://discord.gg/ZZbnsmVnjD
|
keremberke/yolov5n-nfl | keremberke | 2022-12-30T20:47:25Z | 549 | 2 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/nfl-object-detection",
"model-index",
"region:us"
]
| object-detection | 2022-12-30T11:28:56Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/nfl-object-detection
model-index:
- name: keremberke/yolov5n-nfl
results:
- task:
type: object-detection
dataset:
type: keremberke/nfl-object-detection
name: keremberke/nfl-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.2171148618855661 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5n-nfl" src="https://huggingface.co/keremberke/yolov5n-nfl/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5n-nfl')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5n-nfl --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
kaiyuy/leandojo-lean3-tacgen-byt5-small | kaiyuy | 2024-01-17T20:32:49Z | 549 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2023-06-17T04:39:19Z | ---
license: mit
inference:
parameters:
max_length: 1024
widget:
- text: "a b : ℕ\n⊢ a + b = b + a"
example_title: "Example"
---
[LeanDojo: Theorem Proving with Retrieval-Augmented Language Models](https://arxiv.org/abs/xxxx.xxxxx)
Under review, NeurIPS (Datasets and Benchmarks Track), 2023
[Kaiyu Yang](https://yangky11.github.io/), [Aidan Swope](https://aidanswope.com/about), [Alex Gu](https://minimario.github.io/), [Rahul Chalamala](https://www.linkedin.com/in/rchalamala),
[Peiyang Song](https://www.linkedin.com/in/peiyang-song-3279b3251/), [Shixing Yu](https://billysx.github.io/), [Saad Godil](https://www.linkedin.com/in/saad-godil-9728353/), [Ryan Prenger](https://www.linkedin.com/in/ryan-prenger-18797ba1/), [Anima Anandkumar](http://tensorlab.cms.caltech.edu/users/anima/)
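A minimal usage sketch (assuming the standard `transformers` seq2seq interface; the beam-search settings are illustrative): given a Lean proof state such as the widget example above, the model generates candidate tactics.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kaiyuy/leandojo-lean3-tacgen-byt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

state = "a b : ℕ\n⊢ a + b = b + a"  # a Lean 3 proof state
inputs = tokenizer(state, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=1024,          # matches the inference setting in the metadata above
    num_beams=4,              # illustrative: return several candidate tactics
    num_return_sequences=4,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```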
```bibtex
@article{yang2023leandojo,
title={{LeanDojo}: Theorem Proving with Retrieval-Augmented Language Models},
author={Yang, Kaiyu and Swope, Aidan and Gu, Alex and Chalamala, Rahul and Song, Peiyang and Yu, Shixing and Godil, Saad and Prenger, Ryan and Anandkumar, Anima},
journal={arXiv preprint arXiv:xxxx.xxxxx},
year={2023}
}
```
Please visit [LeanDojo Website](https://leandojo.org/) for details. |
timm/efficientvit_b2.r288_in1k | timm | 2023-11-21T21:40:08Z | 549 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-08-18T22:46:14Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b2.r288_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 24.3
- GMACs: 2.6
- Activations (M): 24.0
- Image size: 288 x 288
- **Papers:**
- EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction: https://arxiv.org/abs/2205.14756
- **Original:** https://github.com/mit-han-lab/efficientvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_b2.r288_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b2.r288_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 72, 72])
# torch.Size([1, 96, 36, 36])
# torch.Size([1, 192, 18, 18])
# torch.Size([1, 384, 9, 9])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b2.r288_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 9, 9) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{cai2022efficientvit,
title={EfficientViT: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
|
shaowenchen/chinese-llama-2-7b-gguf | shaowenchen | 2023-09-08T12:21:46Z | 549 | 0 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"chinese",
"text-generation",
"zh",
"license:other",
"region:us"
]
| text-generation | 2023-09-07T10:15:27Z | ---
inference: false
language:
- zh
license: other
model_creator: ziqingyang
model_link: https://huggingface.co/ziqingyang/chinese-llama-2-7b
model_name: chinese-llama-2-7b
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- chinese
---
## Provided files
| Name | Quant method | Size |
| ------------------------------ | ------------ | ------ |
| chinese-llama-2-7b.Q2_K.gguf | Q2_K | 2.7 GB |
| chinese-llama-2-7b.Q3_K.gguf | Q3_K | 3.2 GB |
| chinese-llama-2-7b.Q3_K_L.gguf | Q3_K_L | 3.5 GB |
| chinese-llama-2-7b.Q3_K_S.gguf | Q3_K_S | 2.9 GB |
| chinese-llama-2-7b.Q4_0.gguf | Q4_0 | 3.7 GB |
| chinese-llama-2-7b.Q4_1.gguf | Q4_1 | 4.1 GB |
| chinese-llama-2-7b.Q4_K.gguf | Q4_K | 3.9 GB |
| chinese-llama-2-7b.Q4_K_S.gguf | Q4_K_S | 3.7 GB |
| chinese-llama-2-7b.Q5_0.gguf | Q5_0 | 4.5 GB |
| chinese-llama-2-7b.Q5_1.gguf | Q5_1 | 4.9 GB |
| chinese-llama-2-7b.Q5_K.gguf | Q5_K | 4.6 GB |
| chinese-llama-2-7b.Q5_K_S.gguf | Q5_K_S | 4.5 GB |
| chinese-llama-2-7b.Q6_K.gguf | Q6_K | 5.3 GB |
| chinese-llama-2-7b.Q8_0.gguf | Q8_0 | 6.9 GB |
| chinese-llama-2-7b.gguf | full | 13 GB |
## Provided images
| Name | Quant method | Size |
| -------------------------------------------- | ------------ | ------- |
| `shaowenchen/chinese-llama-2-7b-gguf:Q2_K` | Q2_K | 3.68 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q3_K` | Q3_K | 4.16 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q3_K_L` | Q3_K_L | 4.46 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q3_K_S` | Q3_K_S | 3.81 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q4_0` | Q4_0 | 4.7 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q4_K` | Q4_K | 4.95 GB |
| `shaowenchen/chinese-llama-2-7b-gguf:Q4_K_S` | Q4_K_S | 4.73 GB |
```
docker run --rm -p 8000:8000 shaowenchen/chinese-llama-2-7b-gguf:Q2_K
```
You can then open http://localhost:8000/docs to view the Swagger UI.
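Alternatively, a downloaded GGUF file can be used directly with `llama-cpp-python`. A minimal sketch (the file name, prompt, and settings below are illustrative, not part of this repo's instructions):

```python
from llama_cpp import Llama

# Assumes chinese-llama-2-7b.Q4_K.gguf has been downloaded from this repo.
llm = Llama(
    model_path="./chinese-llama-2-7b.Q4_K.gguf",
    n_ctx=2048,       # illustrative context length
    n_gpu_layers=0,   # set > 0 to offload layers if GPU acceleration is available
)

# The base model is not chat-tuned, so a plain completion prompt is used.
# Prompt: "The history of Beijing is"
output = llm("北京的历史是", max_tokens=256)
print(output["choices"][0]["text"])
```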
|
TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF | TheBloke | 2023-10-12T04:41:06Z | 549 | 10 | adapter-transformers | [
"adapter-transformers",
"gguf",
"mistral",
"chemistry",
"biology",
"climate",
"science",
"philosophy",
"nature",
"ecology",
"biomimicry",
"fauna",
"flora",
"dataset:Severian/Biomimicry",
"dataset:emrgnt-cmplxty/sciphi-textbooks-are-all-you-need",
"dataset:fmars/wiki_stem",
"dataset:fblgit/tree-of-knowledge",
"dataset:Severian/Bio-Design-Process",
"base_model:Severian/ANIMA-Phi-Neptune-Mistral-7B",
"license:mit",
"region:us"
]
| null | 2023-10-12T04:29:57Z | ---
base_model: Severian/ANIMA-Phi-Neptune-Mistral-7B
datasets:
- Severian/Biomimicry
- emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
- fmars/wiki_stem
- fblgit/tree-of-knowledge
- Severian/Bio-Design-Process
inference: false
library_name: adapter-transformers
license: mit
model_creator: Severian
model_name: Anima Phi Neptune Mistral 7B
model_type: mistral
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
tags:
- chemistry
- biology
- climate
- science
- philosophy
- nature
- ecology
- biomimicry
- fauna
- flora
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Anima Phi Neptune Mistral 7B - GGUF
- Model creator: [Severian](https://huggingface.co/Severian)
- Original model: [Anima Phi Neptune Mistral 7B](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Severian's Anima Phi Neptune Mistral 7B](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF)
* [Severian's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: INST
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [anima-phi-neptune-mistral-7b.Q2_K.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [anima-phi-neptune-mistral-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [anima-phi-neptune-mistral-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [anima-phi-neptune-mistral-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [anima-phi-neptune-mistral-7b.Q4_0.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [anima-phi-neptune-mistral-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [anima-phi-neptune-mistral-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [anima-phi-neptune-mistral-7b.Q5_0.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [anima-phi-neptune-mistral-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [anima-phi-neptune-mistral-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [anima-phi-neptune-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [anima-phi-neptune-mistral-7b.Q8_0.gguf](https://huggingface.co/TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF/blob/main/anima-phi-neptune-mistral-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF and below it, a specific filename to download, such as: anima-phi-neptune-mistral-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF anima-phi-neptune-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF anima-phi-neptune-mistral-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m anima-phi-neptune-mistral-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] {prompt} [/INST]"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ANIMA-Phi-Neptune-Mistral-7B-GGUF", model_file="anima-phi-neptune-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Severian's Anima Phi Neptune Mistral 7B
# ANIMA-Phi-Neptune-Mistral-7B: Biomimicry Enhanced LLM
<img src="https://cdn-uploads.huggingface.co/production/uploads/64740cf7485a7c8e1bd51ac9/JZH6p50t_j3-OUph4Wq6y.png" width="500">
## Overview
**ANIMA** (Advanced Nature Inspired Multidisciplinary Assistant) is an expert in various scientific disciplines, including but not limited to biomimicry, biology, and environmental science.
---
## Model Description
ANIMA is fine-tuned on a rich dataset encompassing:
- 4,000+ Nature-Biomimicry examples
- 60k Biomimicry Design Process examples
- 600k STEM facts from Wikipedia
- Science/Philosophy focused 'All-You-Need-Is-Textbooks' dataset
- Additional Tree of Knowledge + Biomimicry data combined fine-tuning
The model aims to assist users in solving problems using nature-inspired strategies and concepts.
### Special Features
- **Multi-disciplinary Expertise**: Knowledge across various scientific and philosophical domains.
- **Biomimicry Design Process**: Incorporates a dataset generated by Mistral and Minotaur-15B. The dataset was then intricately processed by a real person to ensure factuality and grounding.
---
## Datasets
- `Severian/Biomimicry` (1st Fine-Tune)
- `emrgnt-cmplxty/sciphi-textbooks-are-all-you-need` (2nd Fine-Tune)
- `Severian/Bio-Design-Process` & `fmars/wiki_stem` (3rd Fine-Tune)
- `fblgit/tree-of-knowledge` & `Severian/Biomimicry` (4th Fine-Tune)
- Combined all previous datasets into one giant dataset for a quick fine-tuning for a total of 730k data points (5th Fine-Tune)
## Training Stages
1. **Base Model**: `ehartford/dolphin-2.0-mistral-7b`
2. **1st Fine-Tune**: `ANIMA-Echo-Mistral-7B-v1` - 1 Epoch x 5hrs on 1 x T4 Small
3. **2nd Fine-Tune**: `ANIMA-Phi-Neptune-Mistral-v1` - 1 Epoch x 1hr on 1 x A100
4. **3rd Fine-Tune**: `ANIMA-Phi-Neptune-Mistral-7B-v2` - 1 Epoch x 1hr on 1 x A100
5. **4th Fine-Tune**: `ANIMA-Phi-Neptune-Mistral-7B-v3` - 1 Epoch x 1hr on 1 x A100
6. **5th Fine-Tune**: `ANIMA-Phi-Neptune-Mistral-7B-v4` - 1 Epoch x 15mins on 1 x A100
---
## 📝 Usage
ANIMA is designed around the Biomimicry Design Process to guide users in solving problems through biomimicry. For more information and examples, please visit [Biomimicry Design Process](https://toolbox.biomimicry.org/methods/process/).
## 🏁 Benchmarks
- Average - 62.22
- ARC - 56.83
- HellaSwag - 78.82
- MMLU - 53.84
- TruthfulQA - 59.40
---
- **EXAMPLE OUTPUT**
[INST] How can biomimicry help in water purification? [/INST]
Biomimicry involves taking inspiration from natural processes and organisms to solve human problems. In the context of water purification, biomimicry can help develop sustainable and efficient methods for cleaning water. One specific example of a nature-inspired strategy to solve this problem is the use of biofiltration, which involves using microorganisms to remove pollutants from water.
Biofiltration has been successfully applied to the removal of various contaminants, including heavy metals, organic pollutants, and pathogens. For example, a study by Zhang et al. (2011) demonstrated that a mixed culture of Pseudomonas putida and Bacillus subtilis could effectively remove lead and cadmium from contaminated water. The researchers found that the biofilm formed by the mixed culture had a higher removal efficiency than that formed by either species alone. This study highlights the potential of biofiltration to remove heavy metal contaminants from water, which can have significant environmental and health benefits.
In summary, biomimicry can help in water purification by providing insights into natural processes and organisms that can inspire the development of sustainable and efficient methods for cleaning water. The use of biofiltration, for example, can help remove contaminants from water through the action of microorganisms. Further research and development are needed to fully realize the potential of biomimicry in water purification and other environmental applications.
---
## 📜 License
This project is licensed under MIT
<!-- original-model-card end -->
|
Exscientia/IgT5 | Exscientia | 2024-06-14T10:01:49Z | 549 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"antibody language model",
"antibody",
"protein language model",
"arxiv:2403.17889",
"base_model:Exscientia/IgT5_unpaired",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text2text-generation | 2024-03-26T15:58:05Z | ---
tags:
- antibody language model
- antibody
- protein language model
base_model: Exscientia/IgT5_unpaired
license: mit
---
# IgT5 model
Model pretrained on protein and antibody sequences using a masked language modeling (MLM) objective. It was introduced in the paper [Large scale paired antibody language models](https://arxiv.org/abs/2403.17889).
The model is finetuned from IgT5-unpaired using paired antibody sequences from the [Observed Antibody Space](https://opig.stats.ox.ac.uk/webapps/oas/).
# Use
The encoder part of the model and tokeniser can be loaded using the `transformers` library
```python
from transformers import T5EncoderModel, T5Tokenizer
tokeniser = T5Tokenizer.from_pretrained("Exscientia/IgT5", do_lower_case=False)
model = T5EncoderModel.from_pretrained("Exscientia/IgT5")
```
The tokeniser is used to prepare batch inputs
```python
# heavy chain sequences
sequences_heavy = [
"VQLAQSGSELRKPGASVKVSCDTSGHSFTSNAIHWVRQAPGQGLEWMGWINTDTGTPTYAQGFTGRFVFSLDTSARTAYLQISSLKADDTAVFYCARERDYSDYFFDYWGQGTLVTVSS",
"QVQLVESGGGVVQPGRSLRLSCAASGFTFSNYAMYWVRQAPGKGLEWVAVISYDGSNKYYADSVKGRFTISRDNSKNTLYLQMNSLRTEDTAVYYCASGSDYGDYLLVYWGQGTLVTVSS"
]
# light chain sequences
sequences_light = [
"EVVMTQSPASLSVSPGERATLSCRARASLGISTDLAWYQQRPGQAPRLLIYGASTRATGIPARFSGSGSGTEFTLTISSLQSEDSAVYYCQQYSNWPLTFGGGTKVEIK",
"ALTQPASVSGSPGQSITISCTGTSSDVGGYNYVSWYQQHPGKAPKLMIYDVSKRPSGVSNRFSGSKSGNTASLTISGLQSEDEADYYCNSLTSISTWVFGGGTKLTVL"
]
# The tokeniser expects input of the form ["V Q ... S S </s> E V ... I K", ...]
paired_sequences = []
for sequence_heavy, sequence_light in zip(sequences_heavy, sequences_light):
paired_sequences.append(' '.join(sequence_heavy)+' </s> '+' '.join(sequence_light))
tokens = tokeniser.batch_encode_plus(
paired_sequences,
add_special_tokens=True,
pad_to_max_length=True,
return_tensors="pt",
return_special_tokens_mask=True
)
```
Note that the tokeniser adds a `</s>` token at the end of each paired sequence and pads using the `<pad>` token. For example, a batch containing the sequences `V Q L </s> E V V` and `Q V </s> A L` will be tokenised to `V Q L </s> E V V </s>` and `Q V </s> A L </s> <pad> <pad>`.
Sequence embeddings are generated by feeding tokens through the model
```python
output = model(
input_ids=tokens['input_ids'],
attention_mask=tokens['attention_mask']
)
residue_embeddings = output.last_hidden_state
```
To obtain a sequence representation, the residue tokens can be averaged over like so
```python
import torch
# mask special tokens before summing over embeddings
residue_embeddings[tokens["special_tokens_mask"] == 1] = 0
sequence_embeddings_sum = residue_embeddings.sum(1)
# average embedding by dividing sum by sequence lengths
sequence_lengths = torch.sum(tokens["special_tokens_mask"] == 0, dim=1)
sequence_embeddings = sequence_embeddings_sum / sequence_lengths.unsqueeze(1)
```
|
LiteLLMs/Phi-3-mini-128k-instruct-GGUF | LiteLLMs | 2024-05-23T21:08:31Z | 549 | 0 | null | [
"gguf",
"nlp",
"code",
"GGUF",
"text-generation",
"en",
"license:mit",
"region:us"
]
| text-generation | 2024-05-23T12:52:21Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- GGUF
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: andrijdavid
---
# Phi-3-mini-128k-instruct-GGUF
- Original model: [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Phi-3-mini-128k-instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Phi-3-mini-128k-instruct
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that each can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
## Benchmarks
| | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch after this list)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
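For example, here is a minimal sketch of loading the original (non-GGUF) checkpoint with eager attention via Transformers. It assumes the `transformers` library is installed and downloads `microsoft/Phi-3-mini-128k-instruct`, not the quantized files in this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # avoid flash attention on V100 or earlier GPUs
)

inputs = tokenizer("Write a haiku about llamas.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```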
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
<!-- original-model-card end -->