modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
tatsu-lab/alpaca-7b-wdiff | tatsu-lab | 2023-05-22T20:38:13Z | 652 | 53 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-13T04:46:48Z | ---
license: cc-by-nc-4.0
---
### Stanford Alpaca-7B
This repo hosts the weight diff for [Stanford Alpaca-7B](https://github.com/tatsu-lab/stanford_alpaca/) that can be used to reconstruct the original model weights when applied to Meta's LLaMA weights.
To recover the original Alpaca-7B weights, follow these steps:
```text
1. Convert Meta's released weights into huggingface format. Follow this guide:
https://huggingface.co/docs/transformers/main/model_doc/llama
2. Make sure you have cloned the released weight diff onto your local machine. The weight diff is located at:
https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
3. Run the recovery script (weight_diff.py) with the correct paths, e.g.:
python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
```
Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model as follows:
```python
import transformers
alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>")
alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>")
``` |
timm/skresnet34.ra_in1k | timm | 2023-04-24T00:19:27Z | 652 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1903.06586",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-24T00:19:05Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for skresnet34.ra_in1k
SKNet (Selective-Kernel ResNet) image classification model. Trained on ImageNet-1k in `timm` by Ross Wightman using `RA` recipe (ResNet strikes back `B` variant).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.3
- GMACs: 3.7
- Activations (M): 5.1
- Image size: 224 x 224
- **Papers:**
- Selective Kernel Networks: https://arxiv.org/abs/1903.06586
- **Dataset:** ImageNet-1k
- **Original:**
- https://github.com/huggingface/pytorch-image-models
- https://github.com/clovaai/assembled-cnn
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('skresnet34.ra_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'skresnet34.ra_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'skresnet34.ra_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{li2019selective,
title={Selective Kernel Networks},
author={Li, Xiang and Wang, Wenhai and Hu, Xiaolin and Yang, Jian},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
year={2019}
}
```
|
timm/convmixer_1024_20_ks9_p14.in1k | timm | 2023-04-24T03:13:50Z | 652 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.09792",
"license:mit",
"region:us"
] | image-classification | 2023-04-24T03:13:31Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for convmixer_1024_20_ks9_p14.in1k
A ConvMixer image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 24.4
- GMACs: 5.5
- Activations (M): 5.5
- Image size: 224 x 224
- **Papers:**
- Patches Are All You Need?: https://arxiv.org/abs/2201.09792
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/locuslab/convmixer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convmixer_1024_20_ks9_p14.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convmixer_1024_20_ks9_p14.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 16, 16) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{trockman2022patches,
  title={Patches Are All You Need?},
  author={Trockman, Asher and Kolter, J Zico},
  journal={arXiv preprint arXiv:2201.09792},
  year={2022}
}
```
|
filipealmeida/llama-2-7b-pii-transform | filipealmeida | 2023-08-31T02:19:29Z | 652 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-31T01:18:24Z | Entry not found |
notmahi/dobb-e | notmahi | 2023-11-28T03:44:47Z | 652 | 57 | timm | [
"timm",
"pytorch",
"safetensors",
"robotics",
"vision",
"arxiv:2311.16098",
"license:mit",
"region:us"
] | robotics | 2023-11-24T16:25:08Z | ---
license: mit
library_name: timm
tags:
- robotics
- vision
pipeline_tag: robotics
---
<img style="max-width: 720px;" src="https://cdn-uploads.huggingface.co/production/uploads/630e567f8df86f1e5bf0d837/CISEAH0AbTJVDJuZWkqFK.jpeg">
# Dobb·E
[Project webpage](https://dobb-e.com) · [Documentation (gitbooks)](https://docs.dobb-e.com) · [Paper](https://arxiv.org/abs/2311.16098)
**Authors**: [Mahi Shafiullah*](https://mahis.life), [Anant Rai*](https://raianant.github.io/), [Haritheja Etukuru](https://haritheja.com/), [Yiqian Liu](https://www.linkedin.com/in/eva-liu-ba90a5209/), [Ishan Misra](https://imisra.github.io/), [Soumith Chintala](https://soumith.ch), [Lerrel Pinto](https://lerrelpinto.com)
Open-source repository of the Home Pretrained Representations (HPR) from [Dobb·E](https://dobb-e.com) and the associated paper, [On Bringing Robots Home](https://arxiv.org/abs/2311.16098).
<video autoplay muted style="max-width: 720px;" src="https://cdn-uploads.huggingface.co/production/uploads/630e567f8df86f1e5bf0d837/tmL48wY0F8eL2Mluizrw3.mp4"></video>
## What's on this repo
This repo hosts our [Home Pretrained Representations (HPR)](https://dobb-e.com/#models), a ResNet34 model trained on our dataset, [Homes of New York (HoNY)](https://dobb-e.com/#dataset). You can download the weights directly, or get started with [timm](https://huggingface.co/docs/timm/index).
```python
import timm
model = timm.create_model("hf_hub:notmahi/dobb-e", pretrained=True)
```
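As a quick illustration (a minimal sketch, not from the original card: the preprocessing comes from timm's resolved data config and the image URL is just an example), you can use the encoder to embed a single image:
```python
import timm
import torch
from PIL import Image
from urllib.request import urlopen

# Load HPR as a feature extractor (num_classes=0 drops any classifier head).
model = timm.create_model("hf_hub:notmahi/dobb-e", pretrained=True, num_classes=0)
model = model.eval()

# Use the model's own preprocessing (resize + normalization) resolved by timm.
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
with torch.no_grad():
    embedding = model(transform(img).unsqueeze(0))  # (1, num_features) representation
```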
You can read more about it on our [paper](https://arxiv.org/abs/2311.16098) or our [website](https://dobb-e.com).
Let's bring some robots home! |
albacore/5GgZxGvgqNQSdZnb3XWYZGh5ZAGywMt56zuJQ4EbduCvjV9P_cnn | albacore | 2024-02-13T09:39:22Z | 652 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-03T16:23:16Z | Entry not found |
thesven/Llama-3-Refueled-GGUF | thesven | 2024-05-22T11:42:55Z | 652 | 0 | transformers | [
"transformers",
"gguf",
"data labeling",
"en",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T05:05:36Z | ---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
tags:
- data labeling
---
<div style="width: auto; margin-left: auto; margin-right: auto; background-color:black">
<img src="https://assets-global.website-files.com/6423879a8f63c1bb18d74bfa/648818d56d04c3bdf36d71ab_Refuel_rev8-01_ts-p-1600.png" alt="Refuel.ai" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
## Quantization Description
This repo contains GGUF quantized versions of Refuel AI's Llama-3-Refueled.
The model is supplied in several quantizations so you can test which one works best on the hardware you would like to run it on; a minimal loading sketch follows the list below.
The repo contains quantizations of the following types:
- Q4_0
- Q4_1
- Q4_K
- Q4_K_S
- Q4_K_M
- Q5_0
- Q5_1
- Q5_K
- Q5_K_M
- Q5_K_S
- Q6_K
- Q8_0
- Q2_K
- Q3_K
- Q3_K_S
- Q3_K_XS
<div style="text-align: center;">
<a href="https://github.com/thesven/GGUF-n-Go">
<img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;">
</a>
</div>
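As promised above, here is a minimal loading sketch using `llama-cpp-python` (an illustration, not an official recipe from this repo; the `.gguf` filename below is an assumption, substitute whichever quant you downloaded):
```python
from llama_cpp import Llama

# Hypothetical local filename; use the path of the quant you downloaded from this repo.
llm = Llama(model_path="Llama-3-Refueled-Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "Classify the sentiment of this review as positive or negative: 'Great product, works as advertised.'",
    max_tokens=16,
)
print(out["choices"][0]["text"])
```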
## Model Details
RefuelLLM-2-small, aka Llama-3-Refueled, is a Llama3-8B base model instruction tuned on a corpus of 2750+ datasets, spanning tasks such as classification, reading comprehension, structured attribute extraction and entity resolution. We're excited to open-source the model for the community to build on top of.
* More details about [RefuelLLM-2 family of models](https://www.refuel.ai/blog-posts/announcing-refuel-llm-2)
* You can also try out the models in our [LLM playground](https://labs.refuel.ai/playground)
**Model developers** - Refuel AI
**Input** - Text only.
**Output** - Text only.
**Architecture** - Llama-3-Refueled is built on top of Llama-3-8B-instruct which is an auto-regressive language model that uses an optimized transformer architecture.
**Release Date** - May 8, 2024.
**License** - [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en)
## Training Data
The model was trained on over 4 billion tokens, spanning 2,750+ NLP tasks. Our training collection consists mainly of:
1. Human annotated datasets like Flan, Task Source, and the Aya collection
2. Synthetic datasets like OpenOrca, OpenHermes and WizardLM
3. Proprietary datasets developed or licensed by Refuel AI
## Benchmarks
In this section, we report the results for Refuel models on our benchmark of labeling tasks. For details on the methodology see [here](https://refuel.ai/blog-posts/announcing-refuel-llm-2).
<table>
<tr></tr>
<tr><th>Provider</th><th>Model</th><th colspan="5" style="text-align: center">LLM Output Quality (by task type)</th></tr>
<tr><td></td><td></td><td>Overall</td><td>Classification</td><td>Reading Comprehension</td><td>Structure Extraction</td><td>Entity Matching</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2</td><td>83.82%</td><td>84.94%</td><td>76.03%</td><td>88.16%</td><td>92.00%</td><td></td></tr>
<tr><td>OpenAI</td><td>GPT-4-Turbo</td><td>80.88%</td><td>81.77%</td><td>72.08%</td><td>84.79%</td><td>97.20%</td><td></td></tr>
<tr><td>Refuel</td><td>RefuelLLM-2-small (Llama-3-Refueled)</td><td>79.67%</td><td>81.72%</td><td>70.04%</td><td>84.28%</td><td>92.00%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Opus</td><td>79.19%</td><td>82.49%</td><td>67.30%</td><td>88.25%</td><td>94.96%</td><td></td></tr>
<tr><td>Meta</td><td>Llama3-70B-Instruct</td><td>78.20%</td><td>79.38%</td><td>66.03%</td><td>85.96%</td><td>94.13%</td><td></td></tr>
<tr><td>Google</td><td>Gemini-1.5-Pro</td><td>74.59%</td><td>73.52%</td><td>60.67%</td><td>84.27%</td><td>98.48%</td><td></td></tr>
<tr><td>Mistral</td><td>Mixtral-8x7B-Instruct</td><td>62.87%</td><td>79.11%</td><td>45.56%</td><td>47.08%</td><td>86.52%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Sonnet</td><td>70.99%</td><td>79.91%</td><td>45.44%</td><td>78.10%</td><td>96.34%</td><td></td></tr>
<tr><td>Anthropic</td><td>Claude-3-Haiku</td><td>69.23%</td><td>77.27%</td><td>50.19%</td><td>84.97%</td><td>54.08%</td><td></td></tr>
<tr><td>OpenAI</td><td>GPT-3.5-Turbo</td><td>68.13%</td><td>74.39%</td><td>53.21%</td><td>69.40%</td><td>80.41%</td><td></td></tr>
<tr><td>Meta</td><td>Llama3-8B-Instruct</td><td>62.30%</td><td>68.52%</td><td>49.16%</td><td>65.09%</td><td>63.61%</td><td></td></tr>
</table>
## Limitations
Llama-3-Refueled does not have any moderation mechanisms. We look forward to engaging with the community
on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs. |
Alphacode-AI/Alphacode-MALI-11B_slowtest | Alphacode-AI | 2024-05-29T02:18:39Z | 652 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-29T00:54:16Z | ---
license: cc-by-4.0
language:
- ko
pipeline_tag: text-generation
tags:
- merge
---


MALI-11B (Model with Auto Learning Ideation) is a merged version of Alphacode's models that has been fine-tuned with our in-house custom data.
Training spec: we trained the model on 8x A100 GPUs using DeepSpeed, the Hugging Face TRL trainer, and Hugging Face Accelerate.
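A minimal usage sketch (an assumption on our side, not documented in this card: standard `transformers` causal-LM loading; `device_map="auto"` requires `accelerate`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Alphacode-AI/Alphacode-MALI-11B_slowtest"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Plain completion; the card does not document a chat template.
prompt = "안녕하세요, 간단히 자기소개를 해주세요."  # "Hello, please introduce yourself briefly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```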
Contact : Alphacode Co. [https://alphacode.ai/] |
mradermacher/d-solar-10.7b-orpo-v1.0-GGUF | mradermacher | 2024-06-14T10:59:02Z | 652 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DavidAhn/d-solar-10.7b-orpo-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-14T10:21:12Z | ---
base_model: DavidAhn/d-solar-10.7b-orpo-v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAhn/d-solar-10.7b-orpo-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
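For example, a short sketch of fetching a single quant from this repo with `huggingface_hub` (pick any filename from the Provided Quants table below):
```python
from huggingface_hub import hf_hub_download

# Download one quant (Q4_K_M here); returns the local file path.
path = hf_hub_download(
    repo_id="mradermacher/d-solar-10.7b-orpo-v1.0-GGUF",
    filename="d-solar-10.7b-orpo-v1.0.Q4_K_M.gguf",
)
print(path)  # point your GGUF runtime (e.g. llama.cpp) at this file
```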
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/d-solar-10.7b-orpo-v1.0-GGUF/resolve/main/d-solar-10.7b-orpo-v1.0.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yichaodu/DiffusionDPO-bias-hps-2.1 | yichaodu | 2024-06-20T12:03:23Z | 652 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"region:us"
] | text-to-image | 2024-06-19T08:58:03Z | ---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Aligned Diffusion Model via DPO
Diffusion model aligned with the following reward model and the DPO algorithm:
```
close-sourced vlm: claude3-opus gemini-1.5 gpt-4o gpt-4v
open-sourced vlm: internvl-1.5
score model: hps-2.1
```
## How to Use
You can load the model and perform inference as follows:
```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel
pretrained_model_name = "runwayml/stable-diffusion-v1-5"
dpo_unet = UNet2DConditionModel.from_pretrained(
"path/to/checkpoint",
subfolder='unet',
torch_dtype=torch.float16
).to('cuda')
pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name, torch_dtype=torch.float16)
pipeline = pipeline.to('cuda')
pipeline.safety_checker = None
pipeline.unet = dpo_unet
generator = torch.Generator(device='cuda')
generator = generator.manual_seed(1)
prompt = "a pink flower"
image = pipeline(prompt=prompt, generator=generator, guidance_scale=gs).images[0]
```
## Citation
```
@misc{mjbench2024mjbench,
title={MJ-BENCH: Is Your Multimodal Reward Model Really a Good Judge?},
author={Chen*, Zhaorun and Du*, Yichao and Wen, Zichen and Zhou, Yiyang and Cui, Chenhang and Weng, Zhenzhen and Tu, Haoqin and Wang, Chaoqi and Tong, Zhengwei and HUANG, Leria and Chen, Canyu and Ye Qinghao and Zhu, Zhihong and Zhang, Yuqing and Zhou, Jiawei and Zhao, Zhuokai and Rafailov, Rafael and Finn, Chelsea and Yao, Huaxiu},
year={2024}
}
``` |
exocet25/open_llama_7b_v2-Q8_0-GGUF | exocet25 | 2024-06-24T00:38:31Z | 652 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"base_model:openlm-research/open_llama_7b_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-24T00:38:02Z | ---
base_model: openlm-research/open_llama_7b_v2
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# exocet25/open_llama_7b_v2-Q8_0-GGUF
This model was converted to GGUF format from [`openlm-research/open_llama_7b_v2`](https://huggingface.co/openlm-research/open_llama_7b_v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openlm-research/open_llama_7b_v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo exocet25/open_llama_7b_v2-Q8_0-GGUF --hf-file open_llama_7b_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo exocet25/open_llama_7b_v2-Q8_0-GGUF --hf-file open_llama_7b_v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo exocet25/open_llama_7b_v2-Q8_0-GGUF --hf-file open_llama_7b_v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo exocet25/open_llama_7b_v2-Q8_0-GGUF --hf-file open_llama_7b_v2-q8_0.gguf -c 2048
```
|
NikolayKozloff/Replete-Coder-Qwen2-1.5b-Q8_0-GGUF | NikolayKozloff | 2024-06-24T10:02:54Z | 652 | 1 | null | [
"gguf",
"region:us"
] | null | 2024-06-24T10:02:45Z | Entry not found |
facebook/s2t-medium-mustc-multilingual-st | facebook | 2023-01-24T16:31:33Z | 651 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"audio",
"speech-translation",
"en",
"de",
"nl",
"es",
"fr",
"it",
"pt",
"ro",
"ru",
"dataset:mustc",
"arxiv:2010.05171",
"arxiv:1904.08779",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- en
- de
- nl
- es
- fr
- it
- pt
- ro
- ru
datasets:
- mustc
tags:
- audio
- speech-translation
- automatic-speech-recognition
pipeline_tag: automatic-speech-recognition
license: mit
---
# S2T-MEDIUM-MUSTC-MULTILINGUAL-ST
`s2t-medium-mustc-multilingual-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Multilingual Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end translation of English speech into text in any of the eight supported target languages (German, Dutch, Spanish, French, Italian, Portuguese, Romanian and Russian).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this a standard sequence to sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
For multilingual speech translation models, `eos_token_id` is used as the `decoder_start_token_id` and
the target language id is forced as the first generated token. To force the target language id as the first
generated token, pass the `forced_bos_token_id` parameter to the `generate()` method. The following
example shows how to translate English speech to French and German text using the `facebook/s2t-medium-mustc-multilingual-st`
checkpoint.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You can either install these as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
# translate English Speech To French Text
generated_ids = model.generate(
input_ids=inputs["input_features"],
attention_mask=inputs["attention_mask"],
forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"]
)
translation_fr = processor.batch_decode(generated_ids)
# translate English Speech To German Text
generated_ids = model.generate(
input_ids=inputs["input_features"],
attention_mask=inputs["attention_mask"],
forced_bos_token_id=processor.tokenizer.lang_code_to_id["de"]
)
translation_de = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-medium-mustc-multilingual-st is trained on [MuST-C](https://ict.fbk.eu/must-c/).
MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems
for speech translation from English into several languages. For each target language, MuST-C comprises several hundred
hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual
transcriptions and translations.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
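For illustration, a minimal sketch of that feature extraction with `torchaudio` (the `Speech2TextProcessor` above already does this for you; the audio file name is a placeholder):
```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

# Load an utterance and resample to 16 kHz if needed (placeholder file name).
waveform, sample_rate = torchaudio.load("utterance.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# Kaldi-compliant 80-channel log mel filter bank features, shape (num_frames, 80).
features = kaldi.fbank(waveform, num_mel_bins=80, sample_frequency=16_000)

# Utterance-level CMVN: zero mean, unit variance per feature dimension.
features = (features - features.mean(dim=0)) / (features.std(dim=0) + 1e-8)
```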
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance the encoder is pre-trained for multilingual ASR. For multilingual models, target language ID token
is used as target BOS.
## Evaluation results
MuST-C test results (BLEU score):
| En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 |
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
Bingsu/my-korean-stable-diffusion-v1-5 | Bingsu | 2023-05-17T10:14:35Z | 651 | 26 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ko",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-09T01:28:33Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
language: ko
---
# my-korean-stable-diffusion-v1-5
It's the [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with only the text encoder and tokenizer replaced by my [Bingsu/clip-vit-large-patch14-ko](https://huggingface.co/Bingsu/clip-vit-large-patch14-ko).
If you are looking for a Korean diffusion model that works well in practice, see:
- [BAAI/AltDiffusion-m9](https://huggingface.co/BAAI/AltDiffusion-m9)
- [Multilingual Stable Diffusion Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#multilingual-stable-diffusion-pipeline)
# Usage
```sh
# quote the version specifiers so the shell does not treat ">" as redirection
pip install transformers "accelerate>=0.14.0" "diffusers>=0.7.2" ftfy
```
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
repo = "Bingsu/my-korean-stable-diffusion-v1-5"
euler_ancestral_scheduler = EulerAncestralDiscreteScheduler.from_config(repo, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
repo, scheduler=euler_ancestral_scheduler, torch_dtype=torch.float16,
)
pipe.to("cuda")
```
```python
prompt = "화성에서 말을 타고 있는 우주인 사진"
seed = 23957
generator = torch.Generator("cuda").manual_seed(seed)
image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```
```python
image
```

## more examples
```python
prompt = "고퀄리티 하얀 고양이 사진"
seed = 46399
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

```python
prompt = "고퀄리티 하얀 고양이 사진, 피아노를 치는 중"
seed = 12345
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

```python
prompt = "달과 별이 보이는 밤하늘을 배경으로 한 해변가 사진"
seed = 1234246
generator = torch.Generator("cuda").manual_seed(seed)
pipe(prompt, num_inference_steps=25, generator=generator).images[0]
```

# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. |
flax/midjourney-v4-diffusion | flax | 2023-05-16T09:26:19Z | 651 | 59 | diffusers | [
"diffusers",
"TPU",
"JAX",
"Flax",
"stable-diffusion",
"text-to-image",
"en",
"license:openrail",
"diffusers:FlaxStableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-09T04:10:59Z | ---
license: openrail
library_name: diffusers
tags:
- TPU
- JAX
- Flax
- stable-diffusion
- text-to-image
language:
- en
---
|
timm/coatnet_rmlp_nano_rw_224.sw_in1k | timm | 2023-05-10T23:51:08Z | 651 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-01-20T21:29:30Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coatnet_rmlp_nano_rw_224.sw_in1k
A timm specific CoAtNet image classification model w/ MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.1
- GMACs: 2.6
- Activations (M): 20.3
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_rmlp_nano_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_nano_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_nano_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
timm/repvgg_b1.rvgg_in1k | timm | 2024-02-10T23:34:57Z | 651 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2101.03697",
"license:mit",
"region:us"
] | image-classification | 2023-03-22T07:20:00Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for repvgg_b1.rvgg_in1k
A RepVGG image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
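For instance, a minimal sketch of toggling two of these features through `timm.create_model` (hedged: `drop_path_rate` is assumed to be accepted by this config, and the values are arbitrary):
```python
import timm
# enable stochastic depth and gradient checkpointing for fine-tuning
model = timm.create_model(
    'repvgg_b1.rvgg_in1k',
    pretrained=True,
    drop_path_rate=0.05,  # stochastic depth rate (illustrative value)
)
model.set_grad_checkpointing(True)  # trade extra compute for lower training memory
```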
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 57.4
- GMACs: 13.2
- Activations (M): 10.6
- Image size: 224 x 224
- **Papers:**
- RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/DingXiaoH/RepVGG
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repvgg_b1.rvgg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b1.rvgg_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b1.rvgg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
|
timm/repvgg_b3g4.rvgg_in1k | timm | 2024-02-10T23:35:05Z | 651 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2101.03697",
"license:mit",
"region:us"
] | image-classification | 2023-03-22T07:25:26Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for repvgg_b3g4.rvgg_in1k
A RepVGG image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
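As an illustration of the per-stage feature extraction mentioned above, a short sketch (the `out_indices` selection is just an example):
```python
import timm
# keep only the last three feature stages
model = timm.create_model(
    'repvgg_b3g4.rvgg_in1k',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3, 4),
)
print(model.feature_info.channels())   # channels of each selected stage
print(model.feature_info.reduction())  # stride of each selected stage w.r.t. the input
```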
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 83.8
- GMACs: 17.9
- Activations (M): 15.1
- Image size: 224 x 224
- **Papers:**
- RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/DingXiaoH/RepVGG
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repvgg_b3g4.rvgg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b3g4.rvgg_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 192, 56, 56])
# torch.Size([1, 384, 28, 28])
# torch.Size([1, 768, 14, 14])
# torch.Size([1, 2560, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b3g4.rvgg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2560, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
|
MAGAer13/mplug-owl-llama-7b | MAGAer13 | 2023-06-06T07:26:32Z | 651 | 15 | transformers | [
"transformers",
"pytorch",
"mplug-owl",
"image-to-text",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-05-08T08:53:39Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-to-text
tags:
- mplug-owl
---
# Usage
## Get the latest codebase from Github
```Bash
git clone https://github.com/X-PLUG/mPLUG-Owl.git
```
## Model initialization
```Python
import torch
from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mplug_owl.tokenization_mplug_owl import MplugOwlTokenizer
from mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor
pretrained_ckpt = 'MAGAer13/mplug-owl-llama-7b'
model = MplugOwlForConditionalGeneration.from_pretrained(
pretrained_ckpt,
torch_dtype=torch.bfloat16,
)
image_processor = MplugOwlImageProcessor.from_pretrained(pretrained_ckpt)
tokenizer = MplugOwlTokenizer.from_pretrained(pretrained_ckpt)
processor = MplugOwlProcessor(image_processor, tokenizer)
```
## Model inference
Prepare model inputs.
```Python
# We use a human/AI template to organize the context as a multi-turn conversation.
# <image> denotes an image placeholder.
prompts = [
'''The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: <image>
Human: Explain why this meme is funny.
AI: ''']
# The image paths should be placed in the image_list and kept in the same order as in the prompts.
# We support URLs, local file paths and base64 strings. You can customize image pre-processing by modifying mplug_owl.modeling_mplug_owl.ImageProcessor
image_list = ['https://xxx.com/image.jpg']
```
Get response.
```Python
# generation kwargs (the same as in transformers) can be passed to model.generate()
generate_kwargs = {
'do_sample': True,
'top_k': 5,
'max_length': 512
}
from PIL import Image
images = [Image.open(_) for _ in image_list]
inputs = processor(text=prompts, images=images, return_tensors='pt')
inputs = {k: v.bfloat16() if v.dtype == torch.float else v for k, v in inputs.items()}
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
res = model.generate(**inputs, **generate_kwargs)
sentence = tokenizer.decode(res.tolist()[0], skip_special_tokens=True)
print(sentence)
``` |
NikolayKozloff/Marx-3B-V2-GGUF | NikolayKozloff | 2023-08-23T04:06:08Z | 651 | 10 | null | [
"gguf",
"region:us"
] | null | 2023-08-23T03:59:35Z | Entry not found |
audreyt/Taiwan-LLM-7B-v2.0.1-chat-GGUF | audreyt | 2023-10-15T18:41:52Z | 651 | 11 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-10-12T15:50:13Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
inference: false
quantized_by: audreyt
---
# Taiwan-LLM-7B-v2.0.1-chat - GGUF
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Taiwan-LLM-7B-v2.0.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0.1-chat)
## Description
This repo contains GGUF format model files for Yen-Ting Lin's [Taiwan LLM based on LLaMa2-7b v2.0.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0.1-chat).
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author.
Use of Taiwan LLM must explicitly acknowledge and credit the original author.
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
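As a minimal, hedged sketch of local inference with `llama-cpp-python` (the GGUF filename below is hypothetical — substitute an actual file from this repo — and the prompt follows the template shown in the card metadata):
```python
from llama_cpp import Llama
llm = Llama(
    model_path="./Taiwan-LLM-7B-v2.0.1-chat.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,
    n_gpu_layers=35,  # set to 0 for CPU-only inference
)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:"
)
output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```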
<!-- footer start -->
<!-- footer end -->
# Original model card
---
# Taiwan LLM based on LLaMa2-7b
Continued pretraining on 20 billion tokens of Traditional Mandarin text, followed by instruction fine-tuning on millions of conversations.
This version does NOT include CommonCrawl.
# 🌟 Checkout New [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
|
bclavie/fio-base-japanese-v0.1 | bclavie | 2023-12-19T10:28:16Z | 651 | 5 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ja",
"dataset:shunk031/JGLUE",
"dataset:shunk031/jsnli",
"dataset:hpprc/jsick",
"dataset:miracl/miracl",
"dataset:castorini/mr-tydi",
"dataset:unicamp-dl/mmarco",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2023-12-18T11:01:07Z | ---
language:
- ja
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
inference: false
datasets:
- shunk031/JGLUE
- shunk031/jsnli
- hpprc/jsick
- miracl/miracl
- castorini/mr-tydi
- unicamp-dl/mmarco
library_name: sentence-transformers
---
# fio-base-japanese-v0.1
A Japanese version of this card is coming soon (I am still learning Japanese, so please forgive any mistakes!)
fio-base-japanese-v0.1 is a proof of concept, and the first release of the Fio family of Japanese embeddings. It is based on [cl-tohoku/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3) and trained on limited volumes of data on a single GPU.
For more information, please refer to [my notes on Fio](https://ben.clavie.eu/fio).
#### Datasets
Similarity/Entailment:
- JSTS (train)
- JSNLI (train)
- JNLI (train)
- JSICK (train)
Retrieval:
- MMARCO (Multilingual Marco) (train, 124k sentence pairs, <1% of the full data)
- Mr.TyDI (train)
- MIRACL (train, 50% sample)
- ~~JSQuAD (train, 50% sample, no LLM enhancement)~~ JSQuAD is not used in the released version, to serve as an unseen test set.
#### Results
> ⚠️ WARNING: fio-base-japanese-v0.1 has seen textual entailment tasks during its training, which is _not_ the case for the other Japanese-only models in this table. This gives Fio an unfair advantage over the previous best results, `cl-nagoya/sup-simcse-ja-[base|large]`. During mid-training evaluations this didn't seem to greatly affect performance; however, JSICK (NLI set) was included in the training data, so it is impossible to fully remove this contamination at the moment. I intend to fix this in a future release, but please keep it in mind as you view the results (see the JSQuAD results in the associated blog post for a fully unseen comparison, although one focused on retrieval).
This table is adapted and truncated (to keep only the most popular models) from [oshizo's benchmarking GitHub repo](https://github.com/oshizo/JapaneseEmbeddingEval); please check it out for more information and give it a star, as it was very useful!
Italics denote the best model for its size when a smaller model outperforms a bigger one (base/large | 768/1024); bold denotes the best overall.
| Model | JSTS valid-v1.1 | JSICK test | MIRACL dev | Average |
|-------------------------------------------------|-----------------|------------|------------|---------|
| bclavie/fio-base-japanese-v0.1 | **_0.863_** | **_0.894_** | 0.718 | _0.825_ |
| cl-nagoya/sup-simcse-ja-base | 0.809 | 0.827 | 0.527 | 0.721 |
| cl-nagoya/sup-simcse-ja-large | _0.831_ | _0.831_ | 0.507 | 0.723 |
| colorfulscoop/sbert-base-ja | 0.742 | 0.657 | 0.254 | 0.551 |
| intfloat/multilingual-e5-base | 0.796 | 0.806 | __0.845__ | 0.816 |
| intfloat/multilingual-e5-large | 0.819 | 0.794 | **0.883** | **_0.832_** |
| pkshatech/GLuCoSE-base-ja | 0.818 | 0.757 | 0.692 | 0.755 |
| text-embedding-ada-002 | 0.790 | 0.789 | 0.7232 | 0.768 |
## Usage
This model requires both `fugashi` and `unidic-lite`:
```
pip install -U fugashi unidic-lite
```
If using for a retrieval task, you must prefix your query with `"関連記事を取得するために使用できるこの文の表現を生成します: "`.
### Usage (Sentence-Transformers)
This model is best used through [sentence-transformers](https://www.SBERT.net). If you don't have it, it's easy to install:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["こんにちは、世界!", "文埋め込み最高!文埋め込み最高と叫びなさい", "極度乾燥しなさい"]
model = SentenceTransformer('bclavie/fio-base-japanese-v0.1')
embeddings = model.encode(sentences)
print(embeddings)
```
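Building on the example above, a hedged sketch of retrieval-style usage with the required query prefix (the query and documents are invented for illustration):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('bclavie/fio-base-japanese-v0.1')
prefix = "関連記事を取得するために使用できるこの文の表現を生成します: "
query = prefix + "日本の首都はどこですか?"
documents = ["東京は日本の首都です。", "大阪は関西地方の大都市です。"]
query_emb = model.encode(query, convert_to_tensor=True)
doc_emb = model.encode(documents, convert_to_tensor=True)
print(util.cos_sim(query_emb, doc_emb))  # cosine similarity of query vs. each document
```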
### Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bclavie/fio-base-japanese-v0.1')
model = AutoModel.from_pretrained('bclavie/fio-base-japanese-v0.1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Citing & Authors
```bibtex
@misc{bclavie-fio-embeddings,
  author = {Benjamin Clavié},
  title = {Fio Japanese Embeddings},
  year = {2023},
  howpublished = {\url{https://ben.clavie.eu/fio}}
}
``` |
BioMistral/BioMistral-7B-SLERP | BioMistral | 2024-02-19T15:37:44Z | 651 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"slerp",
"medical",
"biology",
"conversational",
"fr",
"en",
"es",
"it",
"pl",
"nl",
"de",
"dataset:pubmed",
"arxiv:2402.10373",
"base_model:BioMistral/BioMistral-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T16:29:06Z | ---
base_model:
- BioMistral/BioMistral-7B
- mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
- slerp
- medical
- biology
license: apache-2.0
datasets:
- pubmed
language:
- fr
- en
- es
- it
- pl
- nl
- de
pipeline_tag: text-generation
---
# BioMistral-7B-slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: BioMistral/BioMistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
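For the bitsandbytes rows above (`BnB.4` / `BnB.8`), a minimal sketch of on-the-fly quantized loading with `transformers` (the settings are illustrative, not necessarily those used for the table):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# load the base model in 4-bit precision
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
```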
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
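The snippet above loads the bare encoder (`AutoModel`); for text generation, a hedged sketch using the causal-LM head (the prompt format is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B", device_map="auto")
prompt = "Question: What are the first-line treatments for hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```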
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
|
phannhat/CRF_Transformer_Whisper_tiny_en | phannhat | 2024-05-11T04:13:00Z | 651 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-11T04:12:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LiteLLMs/aya-23-8B-GGUF | LiteLLMs | 2024-05-28T13:51:46Z | 651 | 0 | transformers | [
"transformers",
"gguf",
"GGUF",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"arxiv:2405.15032",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-05-24T14:26:08Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
library_name: transformers
tags:
- GGUF
inference: false
quantized_by: andrijdavid
---
# aya-23-8B-GGUF
- Original model: [aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/aya-23-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/aya-23-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/aya-23-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/aya-23-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use; aya-23-8B supports a context length of 8192, and longer sequences require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
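A short, hedged sketch of the first option, assuming the `langchain_community` package layout (the model path points at the first shard of the split GGUF; llama.cpp is expected to pick up the remaining parts from the same directory):
```python
from langchain_community.llms import LlamaCpp
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_gpu_layers=35,  # set to 0 for CPU-only
    n_ctx=8192,
    temperature=0.3,
)
print(llm.invoke("Why is the sky blue?"))
```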
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: aya-23-8B
# Model Card for Aya-23-8B
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
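A condensed, hedged sketch of a QLoRA-style setup (hyperparameters and the `target_modules` names are assumptions, not the notebook's exact values):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
model_id = "CohereForAI/aya-23-8B"
# 4-bit NF4 quantization for the frozen base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)
# LoRA adapters on the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```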
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aryabumi2024aya,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
year={2024},
eprint={2405.15032},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
picAIso/TARS-8B | picAIso | 2024-05-31T21:14:04Z | 651 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"merging",
"llama3",
"merged",
"conversational",
"en",
"arxiv:2306.01708",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.9",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-31T12:51:52Z | ---
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- nbeerbower/llama-3-gutenberg-8B
- MaziyarPanahi/Llama-3-8B-Instruct-v0.9
library_name: transformers
tags:
- mergekit
- merge
- merging
- llama3
- merged
license: llama3
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
#no parameters necessary for base model
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.5
weight: 0.8
- model: nbeerbower/llama-3-gutenberg-8B
parameters:
density: 0.5
weight: 0.8
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
parameters:
normalize: false
int8_mask: true
dtype: float16
``` |
mradermacher/Avenger2-11b-Passthrough-GGUF | mradermacher | 2024-06-08T04:37:40Z | 651 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Stealth-FusionGrit-7b-Slerp-Exp",
"powermove72/Notus-TheTop-7b-Passthrough",
"en",
"base_model:powermove72/Avenger2-11b-Passthrough",
"endpoints_compatible",
"region:us"
] | null | 2024-06-08T01:52:39Z | ---
base_model: powermove72/Avenger2-11b-Passthrough
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Stealth-FusionGrit-7b-Slerp-Exp
- powermove72/Notus-TheTop-7b-Passthrough
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Avenger2-11b-Passthrough
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Avenger2-11b-Passthrough-GGUF/resolve/main/Avenger2-11b-Passthrough.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
inflaton/Qwen2-1.5B-MAC-gguf-f16 | inflaton | 2024-06-13T11:32:25Z | 651 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:inflaton/Qwen2-1.5B-merged_4bit_forced",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T02:23:26Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
base_model: inflaton/Qwen2-1.5B-merged_4bit_forced
---
# Uploaded model
- **Developed by:** inflaton
- **License:** apache-2.0
- **Finetuned from model :** inflaton/Qwen2-1.5B-merged_4bit_forced
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Haary/TinyLlama-1.1B-gguf-Unsloth | Haary | 2024-07-01T03:50:26Z | 651 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:46:07Z | ---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Haary
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dumitrescustefan/gpt-neo-romanian-780m | dumitrescustefan | 2022-09-17T18:24:19Z | 650 | 10 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"romanian",
"text generation",
"causal lm",
"gpt-neo",
"ro",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-08-29T15:31:26Z | ---
language:
- ro
license: mit # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
tags:
- romanian
- text generation
- causal lm
- gpt-neo
---
# GPT-Neo Romanian 780M
This model is a GPT-Neo transformer decoder model designed using EleutherAI's replication of the GPT-3 architecture.
It was trained on a thoroughly cleaned corpus of Romanian text of about 40GB composed of Oscar, Opus, Wikipedia, literature and various other bits and pieces of text, joined together and deduplicated. It was trained for about a month, totaling 1.5M steps on a v3-32 TPU machine.
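A minimal usage sketch (not an official example from the authors) with the standard `transformers` text-generation pipeline; the Romanian prompt and generation settings are illustrative only.
```python
# Sketch only — generation settings are arbitrary.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="dumitrescustefan/gpt-neo-romanian-780m")
set_seed(42)
print(generator("A fost odată ca niciodată", max_new_tokens=40)[0]["generated_text"])
```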
### Authors:
* Dumitrescu Stefan
* Mihai Ilie
### Evaluation
Evaluation to be added soon, also on [https://github.com/dumitrescustefan/Romanian-Transformers](https://github.com/dumitrescustefan/Romanian-Transformers)
### Acknowledgements
Thanks [TPU Research Cloud](https://sites.research.google/trc/about/) for the TPUv3 machine needed to train this model!
|
sensenova/piccolo-base-zh | sensenova | 2023-09-08T05:38:47Z | 650 | 26 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-09-04T07:04:26Z | ---
tags:
- mteb
model-index:
- name: piccolo-base-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 49.16558217326158
- type: cos_sim_spearman
value: 51.4049475858823
- type: euclidean_pearson
value: 49.85853741070363
- type: euclidean_spearman
value: 51.501428092542234
- type: manhattan_pearson
value: 49.746099634926296
- type: manhattan_spearman
value: 51.41081804320127
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 52.385361699031854
- type: cos_sim_spearman
value: 52.59114913702212
- type: euclidean_pearson
value: 54.994530439418355
- type: euclidean_spearman
value: 52.54102886188004
- type: manhattan_pearson
value: 54.9503071669608
- type: manhattan_spearman
value: 52.51465652540901
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.236
- type: f1
value: 39.43040092463147
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 60.98952187211432
- type: cos_sim_spearman
value: 62.68189713123115
- type: euclidean_pearson
value: 61.089426749761344
- type: euclidean_spearman
value: 62.41743375544581
- type: manhattan_pearson
value: 61.14747216341409
- type: manhattan_spearman
value: 62.488918956547046
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 38.36392300667918
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 35.645927581489175
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.25085782849087
- type: mrr
value: 87.77154761904762
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 86.15357754080844
- type: mrr
value: 88.53547619047617
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.683
- type: map_at_10
value: 35.522999999999996
- type: map_at_100
value: 37.456
- type: map_at_1000
value: 37.576
- type: map_at_3
value: 31.584
- type: map_at_5
value: 33.684999999999995
- type: mrr_at_1
value: 36.459
- type: mrr_at_10
value: 44.534
- type: mrr_at_100
value: 45.6
- type: mrr_at_1000
value: 45.647
- type: mrr_at_3
value: 42.186
- type: mrr_at_5
value: 43.482
- type: ndcg_at_1
value: 36.459
- type: ndcg_at_10
value: 42.025
- type: ndcg_at_100
value: 49.754
- type: ndcg_at_1000
value: 51.815999999999995
- type: ndcg_at_3
value: 37.056
- type: ndcg_at_5
value: 38.962
- type: precision_at_1
value: 36.459
- type: precision_at_10
value: 9.485000000000001
- type: precision_at_100
value: 1.567
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.13
- type: precision_at_5
value: 15.209
- type: recall_at_1
value: 23.683
- type: recall_at_10
value: 52.190999999999995
- type: recall_at_100
value: 84.491
- type: recall_at_1000
value: 98.19600000000001
- type: recall_at_3
value: 37.09
- type: recall_at_5
value: 43.262
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 74.20324714371618
- type: cos_sim_ap
value: 82.32631646194994
- type: cos_sim_f1
value: 76.64052827073876
- type: cos_sim_precision
value: 68.58725761772854
- type: cos_sim_recall
value: 86.83656768763151
- type: dot_accuracy
value: 70.33072760072159
- type: dot_ap
value: 77.46972172609794
- type: dot_f1
value: 73.6668924804026
- type: dot_precision
value: 62.84676354029062
- type: dot_recall
value: 88.98760813654431
- type: euclidean_accuracy
value: 74.78051713770296
- type: euclidean_ap
value: 82.65778389584023
- type: euclidean_f1
value: 77.1843623157445
- type: euclidean_precision
value: 71.05211406096362
- type: euclidean_recall
value: 84.47509936871639
- type: manhattan_accuracy
value: 74.76849067949489
- type: manhattan_ap
value: 82.55694030572194
- type: manhattan_f1
value: 77.1776459569154
- type: manhattan_precision
value: 69.5423855963991
- type: manhattan_recall
value: 86.69628244096329
- type: max_accuracy
value: 74.78051713770296
- type: max_ap
value: 82.65778389584023
- type: max_f1
value: 77.1843623157445
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 72.99799999999999
- type: map_at_10
value: 81.271
- type: map_at_100
value: 81.53399999999999
- type: map_at_1000
value: 81.535
- type: map_at_3
value: 80.049
- type: map_at_5
value: 80.793
- type: mrr_at_1
value: 73.13
- type: mrr_at_10
value: 81.193
- type: mrr_at_100
value: 81.463
- type: mrr_at_1000
value: 81.464
- type: mrr_at_3
value: 80.067
- type: mrr_at_5
value: 80.741
- type: ndcg_at_1
value: 73.34
- type: ndcg_at_10
value: 84.503
- type: ndcg_at_100
value: 85.643
- type: ndcg_at_1000
value: 85.693
- type: ndcg_at_3
value: 82.135
- type: ndcg_at_5
value: 83.401
- type: precision_at_1
value: 73.34
- type: precision_at_10
value: 9.536
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.54
- type: precision_at_5
value: 18.398
- type: recall_at_1
value: 72.99799999999999
- type: recall_at_10
value: 94.31
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 99.789
- type: recall_at_3
value: 87.935
- type: recall_at_5
value: 90.991
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.537
- type: map_at_10
value: 81.292
- type: map_at_100
value: 84.031
- type: map_at_1000
value: 84.066
- type: map_at_3
value: 56.571000000000005
- type: map_at_5
value: 71.082
- type: mrr_at_1
value: 91.2
- type: mrr_at_10
value: 93.893
- type: mrr_at_100
value: 93.955
- type: mrr_at_1000
value: 93.95700000000001
- type: mrr_at_3
value: 93.61699999999999
- type: mrr_at_5
value: 93.767
- type: ndcg_at_1
value: 91.2
- type: ndcg_at_10
value: 88.255
- type: ndcg_at_100
value: 90.813
- type: ndcg_at_1000
value: 91.144
- type: ndcg_at_3
value: 87.435
- type: ndcg_at_5
value: 85.961
- type: precision_at_1
value: 91.2
- type: precision_at_10
value: 42.14
- type: precision_at_100
value: 4.817
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 78.467
- type: precision_at_5
value: 65.75999999999999
- type: recall_at_1
value: 26.537
- type: recall_at_10
value: 89.262
- type: recall_at_100
value: 97.783
- type: recall_at_1000
value: 99.49799999999999
- type: recall_at_3
value: 58.573
- type: recall_at_5
value: 75.154
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.5
- type: map_at_10
value: 57.898
- type: map_at_100
value: 58.599000000000004
- type: map_at_1000
value: 58.616
- type: map_at_3
value: 55.1
- type: map_at_5
value: 56.80500000000001
- type: mrr_at_1
value: 48.5
- type: mrr_at_10
value: 57.898
- type: mrr_at_100
value: 58.599000000000004
- type: mrr_at_1000
value: 58.616
- type: mrr_at_3
value: 55.1
- type: mrr_at_5
value: 56.80500000000001
- type: ndcg_at_1
value: 48.5
- type: ndcg_at_10
value: 62.876
- type: ndcg_at_100
value: 66.00200000000001
- type: ndcg_at_1000
value: 66.467
- type: ndcg_at_3
value: 57.162
- type: ndcg_at_5
value: 60.263999999999996
- type: precision_at_1
value: 48.5
- type: precision_at_10
value: 7.870000000000001
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 21.032999999999998
- type: precision_at_5
value: 14.14
- type: recall_at_1
value: 48.5
- type: recall_at_10
value: 78.7
- type: recall_at_100
value: 92.7
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 63.1
- type: recall_at_5
value: 70.7
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 44.34782608695652
- type: f1
value: 36.401426200836205
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.25891181988743
- type: ap
value: 50.54636280166089
- type: f1
value: 78.55080202541332
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.02878561337955
- type: cos_sim_spearman
value: 75.39509553139982
- type: euclidean_pearson
value: 73.92598696939956
- type: euclidean_spearman
value: 75.5471147196853
- type: manhattan_pearson
value: 73.88049486090739
- type: manhattan_spearman
value: 75.51361990583285
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 64.739
- type: map_at_10
value: 74.039
- type: map_at_100
value: 74.38
- type: map_at_1000
value: 74.39099999999999
- type: map_at_3
value: 72.074
- type: map_at_5
value: 73.29299999999999
- type: mrr_at_1
value: 66.92
- type: mrr_at_10
value: 74.636
- type: mrr_at_100
value: 74.94
- type: mrr_at_1000
value: 74.95
- type: mrr_at_3
value: 72.911
- type: mrr_at_5
value: 73.981
- type: ndcg_at_1
value: 66.92
- type: ndcg_at_10
value: 77.924
- type: ndcg_at_100
value: 79.471
- type: ndcg_at_1000
value: 79.73400000000001
- type: ndcg_at_3
value: 74.17200000000001
- type: ndcg_at_5
value: 76.236
- type: precision_at_1
value: 66.92
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.989000000000004
- type: precision_at_5
value: 17.874000000000002
- type: recall_at_1
value: 64.739
- type: recall_at_10
value: 89.324
- type: recall_at_100
value: 96.342
- type: recall_at_1000
value: 98.38900000000001
- type: recall_at_3
value: 79.378
- type: recall_at_5
value: 84.28099999999999
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.97108271687962
- type: f1
value: 66.8625981386677
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.32212508406187
- type: f1
value: 73.33875034670166
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 49.0
- type: map_at_10
value: 55.022999999999996
- type: map_at_100
value: 55.550999999999995
- type: map_at_1000
value: 55.608000000000004
- type: map_at_3
value: 53.417
- type: map_at_5
value: 54.372
- type: mrr_at_1
value: 49.3
- type: mrr_at_10
value: 55.176
- type: mrr_at_100
value: 55.703
- type: mrr_at_1000
value: 55.76
- type: mrr_at_3
value: 53.567
- type: mrr_at_5
value: 54.522000000000006
- type: ndcg_at_1
value: 49.0
- type: ndcg_at_10
value: 58.089999999999996
- type: ndcg_at_100
value: 60.988
- type: ndcg_at_1000
value: 62.580999999999996
- type: ndcg_at_3
value: 54.803000000000004
- type: ndcg_at_5
value: 56.508
- type: precision_at_1
value: 49.0
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 0.8210000000000001
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.6
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 49.0
- type: recall_at_10
value: 67.80000000000001
- type: recall_at_100
value: 82.1
- type: recall_at_1000
value: 94.8
- type: recall_at_3
value: 58.8
- type: recall_at_5
value: 62.9
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 28.87237408060796
- type: mrr
value: 27.83015873015873
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 70.25
- type: f1
value: 70.29055400149645
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 65.56578234975636
- type: cos_sim_ap
value: 70.89354058570412
- type: cos_sim_f1
value: 71.21024370095002
- type: cos_sim_precision
value: 58.48032564450475
- type: cos_sim_recall
value: 91.02428722280888
- type: dot_accuracy
value: 64.86193827828912
- type: dot_ap
value: 70.17697803463875
- type: dot_f1
value: 70.68676716917922
- type: dot_precision
value: 58.57043719639139
- type: dot_recall
value: 89.1235480464625
- type: euclidean_accuracy
value: 64.86193827828912
- type: euclidean_ap
value: 70.26847152773904
- type: euclidean_f1
value: 70.9984152139461
- type: euclidean_precision
value: 56.81674064679771
- type: euclidean_recall
value: 94.61457233368532
- type: manhattan_accuracy
value: 65.40335679480238
- type: manhattan_ap
value: 70.22941558736018
- type: manhattan_f1
value: 71.09712937475423
- type: manhattan_precision
value: 56.64160401002506
- type: manhattan_recall
value: 95.45934530095037
- type: max_accuracy
value: 65.56578234975636
- type: max_ap
value: 70.89354058570412
- type: max_f1
value: 71.21024370095002
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.92999999999999
- type: ap
value: 87.16059195012956
- type: f1
value: 89.90917477839415
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 27.74161502387672
- type: cos_sim_spearman
value: 31.58353529723325
- type: euclidean_pearson
value: 32.43729673844635
- type: euclidean_spearman
value: 31.59527486602242
- type: manhattan_pearson
value: 32.37467059678786
- type: manhattan_spearman
value: 31.44408004951894
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.233749845501194
- type: cos_sim_spearman
value: 36.47808586229587
- type: euclidean_pearson
value: 32.663447466546806
- type: euclidean_spearman
value: 34.45830454037139
- type: manhattan_pearson
value: 32.80239212096335
- type: manhattan_spearman
value: 34.581060433895125
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 63.05131937664673
- type: cos_sim_spearman
value: 66.51353746725948
- type: euclidean_pearson
value: 61.24016998745561
- type: euclidean_spearman
value: 66.07115266049276
- type: manhattan_pearson
value: 64.55660243659054
- type: manhattan_spearman
value: 66.80282149562386
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 70.45533692882996
- type: cos_sim_spearman
value: 70.6045637565602
- type: euclidean_pearson
value: 72.75588977483554
- type: euclidean_spearman
value: 73.36630581886473
- type: manhattan_pearson
value: 72.72517409326954
- type: manhattan_spearman
value: 73.35358940437355
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.45779474032288
- type: mrr
value: 76.0782192023729
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.458
- type: map_at_10
value: 74.355
- type: map_at_100
value: 78.158
- type: map_at_1000
value: 78.233
- type: map_at_3
value: 52.2
- type: map_at_5
value: 64.14
- type: mrr_at_1
value: 88.37
- type: mrr_at_10
value: 91.117
- type: mrr_at_100
value: 91.231
- type: mrr_at_1000
value: 91.23599999999999
- type: mrr_at_3
value: 90.645
- type: mrr_at_5
value: 90.948
- type: ndcg_at_1
value: 88.37
- type: ndcg_at_10
value: 82.384
- type: ndcg_at_100
value: 86.431
- type: ndcg_at_1000
value: 87.163
- type: ndcg_at_3
value: 83.993
- type: ndcg_at_5
value: 82.411
- type: precision_at_1
value: 88.37
- type: precision_at_10
value: 41.131
- type: precision_at_100
value: 4.9799999999999995
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 73.651
- type: precision_at_5
value: 61.634
- type: recall_at_1
value: 26.458
- type: recall_at_10
value: 81.3
- type: recall_at_100
value: 94.342
- type: recall_at_1000
value: 98.103
- type: recall_at_3
value: 54.020999999999994
- type: recall_at_5
value: 67.781
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 46.814
- type: f1
value: 45.580027683507666
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 61.43613064816144
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 53.01838461793776
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 59.3
- type: map_at_10
value: 69.158
- type: map_at_100
value: 69.60300000000001
- type: map_at_1000
value: 69.611
- type: map_at_3
value: 67.467
- type: map_at_5
value: 68.432
- type: mrr_at_1
value: 59.199999999999996
- type: mrr_at_10
value: 69.108
- type: mrr_at_100
value: 69.553
- type: mrr_at_1000
value: 69.56099999999999
- type: mrr_at_3
value: 67.417
- type: mrr_at_5
value: 68.382
- type: ndcg_at_1
value: 59.3
- type: ndcg_at_10
value: 73.54
- type: ndcg_at_100
value: 75.652
- type: ndcg_at_1000
value: 75.868
- type: ndcg_at_3
value: 70.074
- type: ndcg_at_5
value: 71.808
- type: precision_at_1
value: 59.3
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.867
- type: precision_at_5
value: 16.36
- type: recall_at_1
value: 59.3
- type: recall_at_10
value: 87.1
- type: recall_at_100
value: 96.89999999999999
- type: recall_at_1000
value: 98.6
- type: recall_at_3
value: 77.60000000000001
- type: recall_at_5
value: 81.8
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.69999999999999
- type: ap
value: 66.65020528563207
- type: f1
value: 83.00542769081453
---
## piccolo-base-zh
piccolo是一个通用embedding模型(中文), 由来自商汤科技的通用模型组完成训练。piccolo借鉴了E5以及GTE的训练流程,采用了两阶段的训练方式。
在第一阶段中,我们搜集和爬取了4亿的中文文本对(可视为弱监督文本对数据),并采用二元组的softmax对比学习损失来优化模型。
在第二阶段中,我们搜集整理了2000万人工标注的中文文本对(精标数据),并采用带有难负样本的三元组的softmax对比学习损失来帮助模型更好地优化。
目前,我们提供了piccolo-base-zh和piccolo-large-zh两个模型。
piccolo is a general text embedding model (Chinese), powered by the General Model Group from SenseTime Research.
Inspired by E5 and GTE, piccolo is trained with a two-stage pipeline. In the first stage, we collect and crawl 400 million weakly supervised Chinese text pairs from the Internet,
and train the model with a pairwise (text, text_pos) softmax contrastive loss.
In the second stage, we collect 20 million human-labeled Chinese text pairs and finetune the model with a triplet (text, text_pos, text_neg) contrastive loss.
We currently offer two model sizes: piccolo-base-zh and piccolo-large-zh.
## Metric
我们将piccolo与其他的开源embedding模型在CMTEB榜单上进行了比较,请参考CMTEB榜单。我们在eval文件夹中提供了复现结果的脚本。
We compared the performance of piccolo with other embedding models on the C-MTEB benchmark; please refer to the C-MTEB leaderboard.
We provide scripts in the "eval" folder for reproducing the results.
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**piccolo-large-zh**] | 0.65 | 1024 | 512 | **64.11** | 67.03 | 47.04 | 78.38 | 65.98 | 70.93 | 58.02 |
| [bge-large-zh]| 1.3 | 1024| 512 | 63.96 | 68.32 | 48.39 | 78.94 | 65.11 | 71.52 | 54.98 |
| [**piccolo-base-zh**]| 0.2 | 768 | 512 | **63.66** | 66.98 | 47.12 | 76.61 | 66.68 | 71.2 | 55.9 |
| [bge-large-zh-no-instruct]| 1.3 | 1024 | 512 | 63.4 | 68.58 | 50.01 | 76.77 | 64.9 | 70.54 | 53 |
| [bge-base-zh]| 0.41 | 768 | 512 | 62.8 | 67.07 | 47.64 | 77.5 | 64.91 | 69.53 | 54.12 |
## Usage
在sentence-transformer package中可以很容易地调用piccolo模型 (piccolo can be loaded easily via the sentence-transformers package):
```python
# for s2s dataset, you can use piccolo as below
# 对于短对短数据集,下面是通用的使用方式
from sentence_transformers import SentenceTransformer
sentences_1 = ["数据1", "数据2"]
sentences_2 = ["数据3", "数据4"]
model = SentenceTransformer('sensenova/piccolo-base-zh')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p dataset, we recommend to add instruction for passage retrieval
# 对于短对长数据集,我们推荐添加instruction,来帮助模型更好地进行检索。
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["doc_1", "doc_2"]
model = SentenceTransformer('sensenova/piccolo-base-zh')
q_embeddings = model.encode(["查询:" + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(["结果:" + p for p in passages], normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
## Training Detail
### pretrain
pretrain 通常不需要太大的max length, 推荐128。小的max length用以提高batch size,加快训练速度,从而适应大规模数据。
pretrain 损失我们采用二元组contrastive loss,不加入hard negative, 直接采用inbatch negative,在实际训练中,我们使用了32张40G A100进行训练,单卡的batch size为1024。
Pretraining usually does not require a large max length; 128 is recommended. A small max length increases the batch size and speeds up training, which suits large-scale data.
For the pretraining loss we use a pairwise (binary) softmax contrastive loss without hard negatives, relying directly on in-batch negatives. In practice we trained on 32 40GB A100 GPUs with a per-card batch size of 1024.
### finetune
finetune 通常会将 max length扩增到512。用以适应更大长度的文本输入,finetune时会多sample S2P的数据,以增强模型在retrieval任务上的性能。
finetune 损失采用三元组contrastive loss,加入hard negative,neg num通常设置为2-7,loss计算方式可以参考GTE里的improved contrastive loss。
注意: 我们给query和passage设置了不同的max length,query的max length始终保持在64。
For finetuning, we usually expand the max length to 512 to accommodate longer text inputs; we also sample more S2P data during finetuning to strengthen the model on retrieval tasks.
The finetuning loss is a triplet softmax contrastive loss with hard negatives; the number of negatives is usually set to 2-7. The loss computation follows the improved contrastive loss in GTE; a minimal sketch is given below.
Note: we set different max lengths for query and passage; the query max length is always kept at 64.
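For illustration only — this is not the project's actual training code — a minimal PyTorch sketch of a triplet-style softmax contrastive loss over in-batch plus hard negatives could look like this; the temperature value and tensor shapes are assumptions.
```python
import torch
import torch.nn.functional as F

def triplet_softmax_contrastive_loss(q, p_pos, p_neg, temperature=0.05):
    # q: (B, D) query embeddings; p_pos: (B, D) positives; p_neg: (B, K, D) hard negatives
    q, p_pos, p_neg = (F.normalize(t, dim=-1) for t in (q, p_pos, p_neg))
    in_batch = q @ p_pos.T                        # (B, B); diagonal entries are the positives
    hard = torch.einsum("bd,bkd->bk", q, p_neg)   # (B, K) scores against hard negatives
    logits = torch.cat([in_batch, hard], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)  # row i's positive sits in column i
    return F.cross_entropy(logits, labels)
```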
### Others
一些有用的trick:
1. 减小显存的方式: fp16 + gradient checkpointing + ZERO STAGE1 (stage2 不支持双塔结构下的gradient checkpointing) 相关issue见: https://github.com/microsoft/DeepSpeed/issues/988
2. dataset sampler,我们采用了M3E的dataset sampler,用以保证每个batch里的样本均来自于一个dataset,负样本更有价值。
3. instruction。instruction在我们的实验中对retrieval任务有非常大的性能提升,我们在每个训练样本前都加入'查询: '和'结果: '这样的instruction。
Some useful tricks:
1. Reducing GPU memory: fp16 + gradient checkpointing + ZeRO stage 1 (stage 2 does not support gradient checkpointing under the dual-tower structure); a small sketch follows this list. For related issues, see: https://github.com/microsoft/DeepSpeed/issues/988
2. Dataset sampler: we use M3E's dataset sampler to ensure that all samples in a batch come from the same dataset, which makes the in-batch negatives more valuable.
3. Instruction: instructions greatly improved retrieval performance in our experiments. We prepend instructions like 'query: ' and 'result: ' to each training sample.
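A short sketch of trick 1 only (fp16 weights plus gradient checkpointing); the ZeRO stage-1 part lives in the DeepSpeed/trainer configuration and is not shown, and wiring this into a full training loop is left out.
```python
# Sketch only — illustrates the memory-saving switches, not the actual training setup.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("sensenova/piccolo-base-zh", torch_dtype=torch.float16)
model.gradient_checkpointing_enable()  # recompute activations in backward to save memory
```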
## Reference
这里我们列出了我们参考过的embedding项目和论文
1. [M3E](https://github.com/wangyuxinwhy/uniem)。非常棒的中文开源embedding项目,收集和整理了较多的中文高质量数据集,uniem也是一个不错的框架。
2. [Text2vec](https://github.com/shibing624/text2vec)。另一个一个非常棒的中文开源embedding项目。
3. [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)。智源AI开源的embedding模型,收集和整理了CMTEB benchmark,填补了中文embedding系统性评测的空缺。
4. [E5](https://github.com/microsoft/unilm/tree/master/e5)。来自微软的一篇文章,有非常详细的消融实验以及数据处理过滤细节。
5. [GTE](https://huggingface.co/thenlper/gte-base)。一篇来自阿里达摩的embedding论文。
Here we list the embedding projects and papers we have referenced
1. [M3E](https://github.com/wangyuxinwhy/uniem). A great Chinese open source embedding project that collects and organizes a large number of high-quality Chinese datasets. Uniem is also a good framework.
2. [Text2vec](https://github.com/shibing624/text2vec). Another great Chinese open source embedding project.
3. [Flag Embedding](https://github.com/FlagOpen/FlagEmbedding). Zhiyuan AI's open-source embedding model. They collected and organized the CMTEB benchmark, filling the gap in systematic evaluation of Chinese embeddings.
4. [E5](https://github.com/microsoft/unilm/tree/master/e5). Powered by Microsoft, with very detailed ablation experiments and data processing/filtering details.
5. [GTE](https://huggingface.co/thenlper/gte-base). An embedding paper from Alibaba Damo.
## License
Piccolo 使用 MIT License,免费商用。
Piccolo use MIT License. It can be used for commercial purposes free of charge.
## Acknowledgement
piccolo 由来自商汤科技研究院的通用模型组完成训练,[Jinkin](https://huggingface.co/Jinkin) 完成了代码实现和模型训练, [Jinkin](https://huggingface.co/Jinkin),
[CCCCxxx](https://huggingface.co/CCCCxxx) 一起完成了数据搜集、整理和评测工作. 项目由 [Gaomengya](https://huggingface.co/gaomengya) 和 [chaorenwu111](https://huggingface.co/chaorenwu111) 主导。
同时,感谢[lux0933](https://huggingface.co/lux0933)以及[yangkai001](https://huggingface.co/yangkai001)的交流与讨论,提供了非常多有用的建议。
piccolo is powered by the General Model group from SenseTime Research.
[Jinkin](https://huggingface.co/Jinkin) completed the code implementation and model training.
[Jinkin](https://huggingface.co/Jinkin) and [CCCCxxx](https://huggingface.co/CCCCxxx) completed the data collection, processing and model evaluation together.
The project is led by [Gaomengya](https://huggingface.co/gaomengya) and [chaorenwu111](https://huggingface.co/chaorenwu111).
We also thank [lux0933](https://huggingface.co/lux0933) and [yangkai001](https://huggingface.co/yangkai001) for discussions that provided many useful suggestions. |
ntc-ai/SDXL-LoRA-slider.micro-details-fine-details-detailed | ntc-ai | 2023-12-27T19:51:14Z | 650 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-27T19:51:12Z |
---
language:
- en
thumbnail: "images/evaluate/micro details, fine details, detailed.../micro details, fine details, detailed_17_3.0.png"
widget:
- text: micro details, fine details, detailed
output:
url: images/micro details, fine details, detailed_17_3.0.png
- text: micro details, fine details, detailed
output:
url: images/micro details, fine details, detailed_19_3.0.png
- text: micro details, fine details, detailed
output:
url: images/micro details, fine details, detailed_20_3.0.png
- text: micro details, fine details, detailed
output:
url: images/micro details, fine details, detailed_21_3.0.png
- text: micro details, fine details, detailed
output:
url: images/micro details, fine details, detailed_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "micro details, fine details, detailed"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - micro details, fine details, detailed (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/micro details, fine details, detailed_17_-3.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_17_0.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_17_3.0.png" width=256 height=256 /> |
| <img src="images/micro details, fine details, detailed_19_-3.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_19_0.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_19_3.0.png" width=256 height=256 /> |
| <img src="images/micro details, fine details, detailed_20_-3.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_20_0.0.png" width=256 height=256 /> | <img src="images/micro details, fine details, detailed_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
micro details, fine details, detailed
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.micro-details-fine-details-detailed', weight_name='micro details, fine details, detailed.safetensors', adapter_name="micro details, fine details, detailed")
# Activate the LoRA
pipe.set_adapters(["micro details, fine details, detailed"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, micro details, fine details, detailed"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 670+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
RichardErkhov/openai-community_-_gpt2-gguf | RichardErkhov | 2024-05-01T22:11:23Z | 650 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-04-17T09:19:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2 - GGUF
- Model creator: https://huggingface.co/openai-community/
- Original model: https://huggingface.co/openai-community/gpt2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q2_K.gguf) | Q2_K | 0.07GB |
| [gpt2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q3_K_L.gguf) | Q3_K_L | 0.09GB |
| [gpt2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [gpt2.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q4_K.gguf) | Q4_K | 0.1GB |
| [gpt2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [gpt2.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q4_1.gguf) | Q4_1 | 0.1GB |
| [gpt2.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-gguf/blob/main/gpt2.Q6_K.gguf) | Q6_K | 0.13GB |
Original model description:
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
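A small illustration (not part of the original card) of the byte-level BPE described above; the sample sentence is arbitrary.
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(len(tokenizer))  # 50257, the vocabulary size mentioned above
ids = tokenizer("Byte-level BPE can encode any Unicode text.")["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```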
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
etadevosyan/did_the_doctor_say_goodbye_to_the_patient_bert_Last128 | etadevosyan | 2024-05-21T14:36:20Z | 650 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-21T14:35:57Z | Entry not found |
mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF | mradermacher | 2024-06-12T07:24:43Z | 650 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-zen-56b-v21.2-32k",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T06:21:32Z | ---
base_model: OpenBuddy/openbuddy-zen-56b-v21.2-32k
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
library_name: transformers
license: other
license_link: https://llama.meta.com/llama3/license/
license_name: llama3
quantized_by: mradermacher
tags:
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-zen-56b-v21.2-32k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
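As a hedged sketch (not from the original card), newer `llama-cpp-python` builds can pull a quant straight from the Hub via `Llama.from_pretrained`; the filename matches the i1-Q4_K_M entry in the table below (a ~34 GB download), and older builds can instead download the file manually and pass `model_path`.
```python
# Sketch only — assumes a llama-cpp-python version that provides Llama.from_pretrained.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF",
    filename="openbuddy-zen-56b-v21.2-32k.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```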
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 12.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 13.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q2_K.gguf) | i1-Q2_K | 21.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 22.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 24.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ3_S.gguf) | i1-IQ3_S | 24.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ3_M.gguf) | i1-IQ3_M | 25.7 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 27.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 30.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 30.5 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q4_0.gguf) | i1-Q4_0 | 32.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 32.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 34.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 39.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 40.2 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zen-56b-v21.2-32k-i1-GGUF/resolve/main/openbuddy-zen-56b-v21.2-32k.i1-Q6_K.gguf) | i1-Q6_K | 46.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch | NESPED-GEN | 2024-06-24T00:03:40Z | 650 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-22T20:41:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF | nalf3in2 | 2024-06-26T15:02:30Z | 650 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-26T15:01:56Z | ---
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nalf3in2/Llama-3-Instruct-8B-SPPO-Iter3-Q6_K-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q6_k.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-ts-en | Helsinki-NLP | 2023-08-16T12:07:32Z | 649 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ts-en
* source languages: ts
* target languages: en
* OPUS readme: [ts-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.eval.txt)
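As a quick illustration, here is a minimal usage sketch with the 🤗 `transformers` Marian classes (not part of the original OPUS-MT release notes; the input sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ts-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Replace the placeholder with your own Tsonga source text
batch = tokenizer(["<Tsonga text to translate>"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```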
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.en | 44.0 | 0.590 |
|
facebook/incoder-1B | facebook | 2023-01-24T17:06:37Z | 649 | 38 | transformers | [
"transformers",
"pytorch",
"xglm",
"text-generation",
"code",
"python",
"javascript",
"arxiv:2204.05999",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-04-08T15:40:08Z | ---
license: "cc-by-nc-4.0"
tags:
- code
- python
- javascript
---
# InCoder 1B
A 1B parameter decoder-only Transformer model trained on code using a causal-masked objective, which allows inserting/infilling code as well as standard left-to-right generation.
The model was trained on public open-source repositories with a permissive, non-copyleft, license (Apache 2.0, MIT, BSD-2 or BSD-3) from GitHub and GitLab, as well as StackOverflow. Repositories primarily contained Python and JavaScript, but also included code from 28 languages, as well as StackOverflow.
For more information, see our:
- [Demo](https://huggingface.co/spaces/facebook/incoder-demo)
- [Project site](https://sites.google.com/view/incoder-code-models)
- [Examples](https://sites.google.com/view/incoder-code-models/home/examples)
- [Paper](https://arxiv.org/abs/2204.05999)
A larger, 6B, parameter model is also available at [facebook/incoder-6B](https://huggingface.co/facebook/incoder-6B).
## Requirements
`pytorch`, `tokenizers`, and `transformers`. Our model requires HF's tokenizers >= 0.12.1, due to changes in the pretokenizer.
```
pip install torch
pip install "tokenizers>=0.12.1"
pip install transformers
```
## Usage
See [https://github.com/dpfried/incoder](https://github.com/dpfried/incoder) for example code.
### Model
`model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")`
### Tokenizer
`tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")`
(Note: the incoder-1B and incoder-6B tokenizers are identical, so 'facebook/incoder-6B' could also be used.)
When calling `tokenizer.decode`, it's important to pass `clean_up_tokenization_spaces=False` to avoid removing spaces after punctuation. For example:
`tokenizer.decode(tokenizer.encode("from ."), clean_up_tokenization_spaces=False)`
(Note: encoding prepends the `<|endoftext|>` token, as this marks the start of a document to our model. This token can be removed from the decoded output by passing `skip_special_tokens=True` to `tokenizer.decode`.)
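For a quick start, the pieces above can be combined into a minimal left-to-right generation sketch (the prompt and sampling settings are illustrative; see the linked repository for the authors' full examples, including infilling):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")

prompt = "def count_lines(filename):\n    "
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=48, do_sample=True, top_p=0.95, temperature=0.2)

# Keep spaces after punctuation, as noted above
print(tokenizer.decode(output_ids[0], clean_up_tokenization_spaces=False, skip_special_tokens=True))
```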
## License
CC-BY-NC 4.0
## Credits
The model was developed by Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer and Mike Lewis.
Thanks to Lucile Saulnier, Leandro von Werra, Nicolas Patry, Suraj Patil, Omar Sanseviero, and others at HuggingFace for help with the model release, and to Naman Goyal and Stephen Roller for the code our demo was based on! |
lllyasviel/sd-controlnet-hed | lllyasviel | 2023-04-24T22:30:38Z | 649 | 25 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-02-24T07:02:21Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
---
# Controlnet - *HED Boundary Version*
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **HED Boundary**.
It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Released Checkpoints
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet_openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet_scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|[lllyasviel/sd-controlnet_seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux
```
2. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
3. Run code:
```py
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from controlnet_aux import HEDdetector
from diffusers.utils import load_image
hed = HEDdetector.from_pretrained('lllyasviel/ControlNet')
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/man.png")
image = hed(image)
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe("oil painting of handsome old man, masterpiece", image, num_inference_steps=20).images[0]
image.save('images/man_hed_out.png')
```



### Training
The HED Edge model was trained on 3M edge-image, caption pairs. The model was trained for 600 GPU-hours with Nvidia A100 80G using Stable Diffusion 1.5 as a base model.
### Blog post
For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet). |
timm/mobilevitv2_075.cvnets_in1k | timm | 2023-04-24T22:23:58Z | 649 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2206.02680",
"license:other",
"region:us"
] | image-classification | 2023-04-24T22:23:48Z | ---
tags:
- image-classification
- timm
library_name: timm
license: other
datasets:
- imagenet-1k
---
# Model card for mobilevitv2_075.cvnets_in1k
A MobileViT-v2 image classification model. Trained on ImageNet-1k by paper authors.
See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 2.9
- GMACs: 1.1
- Activations (M): 12.1
- Image size: 256 x 256
- **Papers:**
- Separable Self-attention for Mobile Vision Transformers: https://arxiv.org/abs/2206.02680
- **Original:** https://github.com/apple/ml-cvnets
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('mobilevitv2_075.cvnets_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_075.cvnets_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 128, 128])
# torch.Size([1, 96, 64, 64])
# torch.Size([1, 192, 32, 32])
# torch.Size([1, 288, 16, 16])
# torch.Size([1, 384, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'mobilevitv2_075.cvnets_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Mehta2022SeparableSF,
title={Separable Self-attention for Mobile Vision Transformers},
author={Sachin Mehta and Mohammad Rastegari},
journal={ArXiv},
year={2022},
volume={abs/2206.02680}
}
```
|
TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF | TheBloke | 2023-09-27T12:46:41Z | 649 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"base_model:Sao10K/Mythical-Destroyer-V2-L2-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-08-30T09:30:48Z | ---
language:
- en
license: llama2
model_name: Mythical Destroyer V2 L2 13B
base_model: Sao10K/Mythical-Destroyer-V2-L2-13B
inference: false
model_creator: Sao10K
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mythical Destroyer V2 L2 13B - GGUF
- Model creator: [Sao10K](https://huggingface.co/Sao10K)
- Original model: [Mythical Destroyer V2 L2 13B](https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sao10K's Mythical Destroyer V2 L2 13B](https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF)
* [Sao10K's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Mythical-Destroyer-V2-L2-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mythical-destroyer-v2-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [mythical-destroyer-v2-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [mythical-destroyer-v2-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [mythical-destroyer-v2-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [mythical-destroyer-v2-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mythical-destroyer-v2-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [mythical-destroyer-v2-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [mythical-destroyer-v2-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mythical-destroyer-v2-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [mythical-destroyer-v2-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [mythical-destroyer-v2-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [mythical-destroyer-v2-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF/blob/main/mythical-destroyer-v2-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF and below it, a specific filename to download, such as: mythical-destroyer-v2-l2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF mythical-destroyer-v2-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF mythical-destroyer-v2-l2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m mythical-destroyer-v2-l2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mythical-Destroyer-V2-L2-13B-GGUF", model_file="mythical-destroyer-v2-l2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
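### How to load this model from Python using llama-cpp-python
The card also links [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); here is a minimal sketch (not from the original card) for loading the same file with it. Install it first with `pip install llama-cpp-python`. The parameters are illustrative, and the prompt follows the Alpaca template above.
```python
from llama_cpp import Llama

# Point model_path at a GGUF file downloaded as described above.
llm = Llama(
    model_path="mythical-destroyer-v2-l2-13b.Q4_K_M.gguf",
    n_ctx=4096,        # sequence length
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about llamas.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```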
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sao10K's Mythical Destroyer V2 L2 13B
<br>A Merge done for @dampf
**FULL FP16 Model**
**V2 Model**
<br>Changelog:
<br>REMOVED - Llama-2-13B-Chat-fp16 (reason: censored, likely amplified base model quirks)
<br>ADDED - jondurbin/airoboros-l2-13b-2.1 (ghost attention, improved RP and instruction)
<br>Base Model [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
<br> **MERGED WITH**
<br>-----[Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
<br>-----[totally-not-an-llm/PuddleJumper-13b](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[rombodawg/LosslessMegaCoder-llama2-13b-mini](https://huggingface.co/rombodawg/LosslessMegaCoder-llama2-13b-mini)
<br>-----[The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
<br>*using ties-merge*
```
Dampf's Rationale:
I did receive feedback from some users that it likes to add notes and morality to erp stories.
i will kick llama 2 chat and make an uncensored V2 version
in llama 2 chat's place will be the freshly released airboros 2.1
---
well it was not bad, it was just censored because of llama 2 13b chat
i guess charles was really serious about each model retaining its shape
i was expecting parts of it to get watered down, but judging from the strong influence of llama chat that wasn't the case
```
Alpaca should be its main format, but it can also be used with others. Vicuna 1.1 should work well too.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
LIMITATIONS:
While some of the issues of V1 have been fixed, there are some issues left that make the model not very usable in certain scenarios such as roleplaying. The model explains actions and breaks character regularly.
Update: I've found out this was largely due to SillyTavern's formatting. If you are using SillyTavern, make sure to disable example chats formatting and chat start formatting.
<br>Script used to Merge [here](https://github.com/cg123/ties-merge)
<br>Thank you for the easy to set up script, [Chargoddard](https://huggingface.co/chargoddard). Also I want to thank all these hard working model creators for their contributions to the Open Source community!
Command:
```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Mythical-Destroyer-V2-13B --merge Gryphe/MythoMax-L2-13b --merge totally-not-an-llm/PuddleJumper-13b --merge jondurbin/airoboros-l2-13b-2.1 --merge rombodawg/LosslessMegaCoder-llama2-13b-mini --merge The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 --cuda
```
<!-- original-model-card end -->
|
Chrisisis/5ED9EcFBnuBdA5J9iXDc7JAS1FnN4CVSSG6gDJiNwyFdotm_vgg | Chrisisis | 2024-02-24T08:33:15Z | 649 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-19T03:00:11Z | Entry not found |
Buseak/spellcorrector_20_02_050_qwerty_v13 | Buseak | 2024-02-25T13:19:46Z | 649 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"canine",
"token-classification",
"generated_from_trainer",
"base_model:Buseak/spellcorrector_20_02_050_qwerty_v11",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-25T10:57:59Z | ---
license: apache-2.0
base_model: Buseak/spellcorrector_20_02_050_qwerty_v11
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_20_02_050_qwerty_v13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spellcorrector_20_02_050_qwerty_v13
This model is a fine-tuned version of [Buseak/spellcorrector_20_02_050_qwerty_v11](https://huggingface.co/Buseak/spellcorrector_20_02_050_qwerty_v11) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 0.9995
- F1: 0.9997
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
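For reference, a minimal sketch of the equivalent `TrainingArguments` (assuming the standard 🤗 `Trainer` setup; the output directory name is illustrative and the dataset/collator are not part of this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="spellcorrector_20_02_050_qwerty_v13",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```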
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0398 | 1.0 | 967 | 0.0161 | 0.9941 | 0.9898 | 0.9919 | 0.9953 |
| 0.0227 | 2.0 | 1934 | 0.0103 | 0.9952 | 0.9919 | 0.9935 | 0.9970 |
| 0.017 | 3.0 | 2901 | 0.0072 | 0.9936 | 0.9957 | 0.9946 | 0.9978 |
| 0.0126 | 4.0 | 3868 | 0.0048 | 0.9978 | 0.9946 | 0.9962 | 0.9986 |
| 0.0104 | 5.0 | 4835 | 0.0034 | 0.9989 | 0.9973 | 0.9981 | 0.9990 |
| 0.0082 | 6.0 | 5802 | 0.0022 | 0.9989 | 0.9979 | 0.9984 | 0.9993 |
| 0.0064 | 7.0 | 6769 | 0.0015 | 0.9989 | 0.9973 | 0.9981 | 0.9996 |
| 0.0055 | 8.0 | 7736 | 0.0013 | 0.9995 | 0.9989 | 0.9992 | 0.9996 |
| 0.0042 | 9.0 | 8703 | 0.0007 | 0.9989 | 0.9979 | 0.9984 | 0.9998 |
| 0.0035 | 10.0 | 9670 | 0.0004 | 0.9995 | 0.9989 | 0.9992 | 0.9999 |
| 0.0032 | 11.0 | 10637 | 0.0003 | 0.9995 | 0.9989 | 0.9992 | 0.9999 |
| 0.0025 | 12.0 | 11604 | 0.0002 | 0.9995 | 0.9989 | 0.9992 | 1.0000 |
| 0.0022 | 13.0 | 12571 | 0.0002 | 0.9995 | 0.9989 | 0.9992 | 1.0000 |
| 0.002 | 14.0 | 13538 | 0.0001 | 1.0 | 0.9989 | 0.9995 | 1.0000 |
| 0.0015 | 15.0 | 14505 | 0.0001 | 0.9995 | 1.0 | 0.9997 | 1.0000 |
| 0.0013 | 16.0 | 15472 | 0.0001 | 0.9995 | 1.0 | 0.9997 | 1.0000 |
| 0.0012 | 17.0 | 16439 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0000 |
| 0.001 | 18.0 | 17406 | 0.0000 | 1.0 | 0.9995 | 0.9997 | 1.0000 |
| 0.0009 | 19.0 | 18373 | 0.0000 | 1.0 | 0.9995 | 0.9997 | 1.0000 |
| 0.0009 | 20.0 | 19340 | 0.0000 | 1.0 | 0.9995 | 0.9997 | 1.0000 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
fluently/Fluently-XL-v3-Lightning | fluently | 2024-05-24T21:22:05Z | 649 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"sdxl",
"lightning",
"fluetnly-xl",
"fluently",
"trained",
"text-to-image",
"dataset:ehristoforu/midjourney-images",
"dataset:ehristoforu/dalle-3-images",
"dataset:ehristoforu/fav_images",
"base_model:ByteDance/SDXL-Lightning",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-24T12:49:20Z | ---
license: other
license_name: fluently-license
license_link: https://huggingface.co/spaces/fluently/License
datasets:
- ehristoforu/midjourney-images
- ehristoforu/dalle-3-images
- ehristoforu/fav_images
library_name: diffusers
pipeline_tag: text-to-image
base_model: ByteDance/SDXL-Lightning
tags:
- safetensors
- stable-diffusion
- sdxl
- lightning
- fluetnly-xl
- fluently
- trained
inference:
parameters:
num_inference_steps: 5
guidance_scale: 2
negative_prompt: "(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"
---
# **Fluently XL** V3 (Lightning-4Steps) - the best XL-model

[>>> Run in **RunDiffusion** <<<](https://civitai.com/api/run/408626?partnerId=1&strategyId=1099208953)
*This model is super-fast and can generate high quality images.*
Introducing Fluently XL. You are probably ready to argue with the model's name, "the best XL-model", but now I will prove to you why it is true.
## About this model
The model was obtained through training on *expensive graphics accelerators*. A lot of work went into it, and now we will show why this XL model is better than others.
### Features
- Correct anatomy
- Art and realism in one
- Controlling contrast
- Great nature
- Great faces without AfterDetailer
### More info
Our model is better than others because we do not mix but **train**. At first it may seem that the model is not very good, but if you are a real professional you will like it.
## Using
Optimal parameters in Automatic1111/ComfyUI:
- Sampling steps: 4-6
- Sampler method: DPM++ SDE
- CFG Scale: 1.5-2
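For diffusers users, here is a minimal sketch with the recommended step count and CFG scale. It assumes the repository loads with `StableDiffusionXLPipeline`, as declared in the card metadata; the DPM++ SDE sampler recommendation above applies to Automatic1111/ComfyUI, so the scheduler here is left at the pipeline default.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "fluently/Fluently-XL-v3-Lightning", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo of a lighthouse on a cliff at sunset, detailed, natural light",
    negative_prompt="(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, blurry",
    num_inference_steps=5,   # 4-6 recommended
    guidance_scale=2.0,      # 1.5-2 recommended
).images[0]
image.save("fluently_xl_lightning.png")
```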
## End
Let's remove models that copy each other from the top and put one that is actually developing, thank you) |
OwenArli/Llama-3-8B-Cumulus-v0.1 | OwenArli | 2024-05-02T01:08:28Z | 649 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-30T08:04:07Z | ---
license: llama3
---
Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
Mostly an experiment in trying to completely uncensor the model; it doesn't seem to be nearly as good as the original in reasoning and knowledge. It is, however, pretty good for RP.
Quants will soon be uploaded here on HF, and the model will be up on https://awanllm.com for anyone to try.
Training:
- Trained at a 4096 sequence length, while the base model uses 8192. From testing, it still handles the full 8192 context just fine.
- Training took around 3 days on an RTX 4090, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
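A minimal sketch of building this prompt with `transformers` (assuming the repository ships the standard Llama 3 chat template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwenArli/Llama-3-8B-Cumulus-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful, uncensored roleplay assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the instruct format shown above
```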
Quants:
GGUF: https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF
|
OwenArli/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1 | OwenArli | 2024-05-02T03:55:00Z | 649 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T00:54:18Z | ---
license: llama3
---
Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
Fine-tuned with the ORPO method using the following datasets:
- https://huggingface.co/datasets/Intel/orca_dpo_pairs
- https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
- https://huggingface.co/datasets/M4-ai/prm_dpo_pairs_cleaned
- https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
Despite including the toxic datasets to reduce refusals, this model is still relatively safe, but it refuses less than the original Meta model.
As of now, ORPO fine-tuning seems to improve some metrics while reducing others by a lot:

We are happy for anyone to try it out and give some feedback, and we will have the model up on https://awanllm.com if it is popular.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
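A minimal end-to-end generation sketch with `transformers` (assuming the repository ships the standard Llama 3 chat template; the sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwenArli/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain ORPO fine-tuning in two sentences."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```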
Quants: |
janhq/llama3 | janhq | 2024-05-24T04:06:20Z | 649 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2024-05-15T09:35:53Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity\u2019s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta\u2019s proprietary Meta Llama 3 and\
\ Documentation (and any portion thereof) made available under this Agreement.\n\
\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or,\
\ if you are an entity, your principal place of business is in the EEA or Switzerland)\
\ and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta\u2019s intellectual property or other rights owned by Meta embodied\
\ in the Llama Materials to use, reproduce, distribute, copy, create derivative\
\ works of, and make modifications to the Llama Materials.\nb. Redistribution and\
\ Use.\ni. If you distribute or make available the Llama Materials (or any derivative\
\ works thereof), or a product or service that uses any of them, including another\
\ AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display \u201CBuilt with Meta Llama 3\u201D on a related website,\
\ user interface, blogpost, about page, or product documentation. If you use the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include \u201CLlama 3\u201D\
\ at the beginning of any such AI model name.\nii. If you receive Llama Materials,\
\ or any derivative works thereof, from a Licensee as part of an integrated end\
\ user product, then Section 2 of this Agreement will not apply to you.\niii. You\
\ must retain in all copies of the Llama Materials that you distribute the following\
\ attribution notice within a \u201CNotice\u201D text file distributed as a part\
\ of such copies: \u201CMeta Llama 3 is licensed under the Meta Llama 3 Community\
\ License, Copyright \xA9 Meta Platforms, Inc. All Rights Reserved.\u201D\niv. Your\
\ use of the Llama Materials must comply with applicable laws and regulations (including\
\ trade compliance laws and regulations) and adhere to the Acceptable Use Policy\
\ for the Llama Materials (available at https://llama.meta.com/llama3/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\nv. You will not\
\ use the Llama Materials or any output or results of the Llama Materials to improve\
\ any other large language model (excluding Meta Llama 3 or derivative works thereof).\n\
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the\
\ monthly active users of the products or services made available by or for Licensee,\
\ or Licensee\u2019s affiliates, is greater than 700 million monthly active users\
\ in the preceding calendar month, you must request a license from Meta, which Meta\
\ may grant to you in its sole discretion, and you are not authorized to exercise\
\ any of the rights under this Agreement unless or until Meta otherwise expressly\
\ grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE\
\ LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON\
\ AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required for\
\ reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ \u201CLlama 3\u201D (the \u201CMark\u201D) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta\u2019s brand guidelines\
\ (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta\u2019s ownership of Llama Materials and derivatives\
\ made by or for Meta, with respect to any derivative works and modifications of\
\ the Llama Materials that are made by you, as between you and Meta, you are and\
\ will be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama\
\ 3 outputs or results, or any portion of any of the foregoing, constitutes infringement\
\ of intellectual property or other rights owned or licensable by you, then any\
\ licenses granted to you under this Agreement shall terminate as of the date such\
\ litigation or claim is filed or instituted. You will indemnify and hold harmless\
\ Meta from and against any claim by any third party arising out of or related to\
\ your use or distribution of the Llama Materials.\n6. Term and Termination. The\
\ term of this Agreement will commence upon your acceptance of this Agreement or\
\ access to the Llama Materials and will continue in full force and effect until\
\ terminated in accordance with the terms and conditions herein. Meta may terminate\
\ this Agreement if you are in breach of any term or condition of this Agreement.\
\ Upon termination of this Agreement, you shall delete and cease use of the Llama\
\ Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n\
7. Governing Law and Jurisdiction. This Agreement will be governed and construed\
\ under the laws of the State of California without regard to choice of law principles,\
\ and the UN Convention on Contracts for the International Sale of Goods does not\
\ apply to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (\u201CPolicy\u201D). The most recent copy of this policy can be found\
\ at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others\u2019 rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software \u201Cbug,\u201D or other problems that could lead to\
\ a violation of this Policy through one of the following means:\n * Reporting\
\ issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_creator: meta-llama
model_name: Meta-Llama-3-8B-Instruct
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct); a brief usage sketch follows the links below.
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Model description: [Readme](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/README.md)
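As a rough usage sketch (not part of the original card), a GGUF file from this repo can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename and context size below are assumptions, so substitute whichever quantized file you downloaded:
```python
# Minimal sketch: run a downloaded GGUF file from this repo with llama-cpp-python.
# The model_path filename is an assumption; use the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # assumed local filename
    n_ctx=8192,  # Llama 3 context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hey my name is Julien! How are you?"}],
    max_tokens=300,
)
print(out["choices"][0]["message"]["content"])
```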
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that serve as practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand it so that it can convert models into a wider range of formats.
|
RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf | RichardErkhov | 2024-06-05T20:48:19Z | 649 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-05T20:36:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2-wiki-qa - GGUF
- Model creator: https://huggingface.co/XBOT-RK/
- Original model: https://huggingface.co/XBOT-RK/distilgpt2-wiki-qa/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [distilgpt2-wiki-qa.Q2_K.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q2_K.gguf) | Q2_K | 0.06GB |
| [distilgpt2-wiki-qa.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [distilgpt2-wiki-qa.IQ3_S.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [distilgpt2-wiki-qa.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [distilgpt2-wiki-qa.IQ3_M.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [distilgpt2-wiki-qa.Q3_K.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q3_K.gguf) | Q3_K | 0.07GB |
| [distilgpt2-wiki-qa.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [distilgpt2-wiki-qa.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [distilgpt2-wiki-qa.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [distilgpt2-wiki-qa.Q4_0.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q4_0.gguf) | Q4_0 | 0.08GB |
| [distilgpt2-wiki-qa.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [distilgpt2-wiki-qa.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [distilgpt2-wiki-qa.Q4_K.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q4_K.gguf) | Q4_K | 0.08GB |
| [distilgpt2-wiki-qa.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [distilgpt2-wiki-qa.Q4_1.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q4_1.gguf) | Q4_1 | 0.08GB |
| [distilgpt2-wiki-qa.Q5_0.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q5_0.gguf) | Q5_0 | 0.09GB |
| [distilgpt2-wiki-qa.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [distilgpt2-wiki-qa.Q5_K.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q5_K.gguf) | Q5_K | 0.09GB |
| [distilgpt2-wiki-qa.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [distilgpt2-wiki-qa.Q5_1.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q5_1.gguf) | Q5_1 | 0.09GB |
| [distilgpt2-wiki-qa.Q6_K.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q6_K.gguf) | Q6_K | 0.1GB |
| [distilgpt2-wiki-qa.Q8_0.gguf](https://huggingface.co/RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf/blob/main/distilgpt2-wiki-qa.Q8_0.gguf) | Q8_0 | 0.12GB |
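As an illustration (not part of the original table), any of the files above can be fetched programmatically with `huggingface_hub` before loading it in a GGUF runtime such as llama.cpp:
```python
# Sketch: download one of the quantized files listed in the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/XBOT-RK_-_distilgpt2-wiki-qa-gguf",
    filename="distilgpt2-wiki-qa.Q4_K_M.gguf",  # any filename from the table works
)
print(path)  # local path of the downloaded GGUF file
```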
Original model description:
---
language:
- en
tags:
- gpt2
license: mit
datasets:
- wiki_qa
# pipeline_tag: conversational
inference: false
# widget:
# - text: 'What are Glaciers?'
---
## Description
This question-answering model was fine-tuned from [distilgpt2](https://huggingface.co/distilgpt2), a generative, left-to-right transformer in the style of GPT-2. It was trained on the [Wiki-QA](https://huggingface.co/datasets/wiki_qa) dataset from Microsoft.
# How to run XBOT-RK/Distil-GPT2-Wiki-QA using Transformers
## Question-Answering
The following code shows how to use the Distil-GPT2-Wiki-QA checkpoint and Transformers to generate Answers.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
import re
tokenizer = GPT2Tokenizer.from_pretrained("XBOT-RK/distilgpt2-wiki-qa")
model = GPT2LMHeadModel.from_pretrained("XBOT-RK/distilgpt2-wiki-qa")
device = "cuda" if torch.cuda.is_available() else "cpu"
def infer(question):
generated_tensor = model.generate(**tokenizer(question, return_tensors="pt").to(device), max_new_tokens = 50)
generated_text = tokenizer.decode(generated_tensor[0])
return generated_text
def processAnswer(question, result):
answer = result.replace(question, '').strip()
if "<bot>:" in answer:
answer = re.search('<bot>:(.*)', answer).group(1).strip()
if "<endofstring>" in answer:
answer = re.search('(.*)<endofstring>', answer).group(1).strip()
return answer
question = "What is a tropical cyclone?"
result = infer(question)
answer = processAnswer(question, result)
print('Question: ', question)
print('Answer: ', answer)
# Output
"Question: What is a tropical cyclone?"
"Answer: The cyclone is named after the climber Edmond Halley, who described it as the 'most powerful cyclone of the Atlantic'."
```
|
John6666/wai-real-cn-v4-sdxl | John6666 | 2024-06-11T09:36:49Z | 649 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-11T09:32:08Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/469902/wai-realcn?modelVersionId=563951).
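A minimal text-to-image sketch with 🤗 Diffusers (assuming the checkpoint loads with the standard SDXL pipeline, as the repo tags suggest; the prompt is illustrative):
```python
# Sketch: load this SDXL checkpoint with diffusers and generate a single image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-real-cn-v4-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photorealistic portrait, natural lighting",  # illustrative prompt
    num_inference_steps=30,
).images[0]
image.save("output.png")
```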
|
Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka | Omartificial-Intelligence-Space | 2024-06-26T20:30:28Z | 649 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:UBC-NLP/MARBERTv2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-06-17T11:23:10Z | ---
language:
- ar
library_name: sentence-transformers
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: UBC-NLP/MARBERTv2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- >-
ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة
تتحدث إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- >-
رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة
حمراء مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
pipeline_tag: sentence-similarity
model-index:
- name: Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka
results:
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 49.25240527202211
- type: cosine_spearman
value: 51.87708566904703
- type: euclidean_pearson
value: 49.790877425774696
- type: euclidean_spearman
value: 51.725274981021855
- type: main_score
value: 51.87708566904703
- type: manhattan_pearson
value: 52.31560776967401
- type: manhattan_spearman
value: 54.28979124658997
task:
type: STS
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 65.81089479351829
- type: cosine_spearman
value: 65.80163441928238
- type: euclidean_pearson
value: 65.2718874370746
- type: euclidean_spearman
value: 65.92429031695988
- type: main_score
value: 65.80163441928238
- type: manhattan_pearson
value: 65.28701419332383
- type: manhattan_spearman
value: 65.94229793651319
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 65.11346939995998
- type: cosine_spearman
value: 63.00297824477175
- type: euclidean_pearson
value: 63.85320097970942
- type: euclidean_spearman
value: 63.25151047701848
- type: main_score
value: 63.00297824477175
- type: manhattan_pearson
value: 64.40291990853984
- type: manhattan_spearman
value: 63.63497232399945
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 52.2735823521702
- type: cosine_spearman
value: 52.23198766098021
- type: euclidean_pearson
value: 54.12467577456837
- type: euclidean_spearman
value: 52.40014028261351
- type: main_score
value: 52.23198766098021
- type: manhattan_pearson
value: 54.38052509834607
- type: manhattan_spearman
value: 52.70836595958237
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 58.55307076840419
- type: cosine_spearman
value: 59.2261024017655
- type: euclidean_pearson
value: 59.55734715751804
- type: euclidean_spearman
value: 60.135899681574834
- type: main_score
value: 59.2261024017655
- type: manhattan_pearson
value: 59.99274396356966
- type: manhattan_spearman
value: 60.44325356503041
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 68.94418532602707
- type: cosine_spearman
value: 70.01912156519296
- type: euclidean_pearson
value: 71.67028435860581
- type: euclidean_spearman
value: 71.48252471922122
- type: main_score
value: 70.01912156519296
- type: manhattan_pearson
value: 71.9587452337792
- type: manhattan_spearman
value: 71.69160519065173
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 62.81619254162203
- type: cosine_spearman
value: 64.98814526698425
- type: euclidean_pearson
value: 66.43531796610995
- type: euclidean_spearman
value: 66.53768451143964
- type: main_score
value: 64.98814526698425
- type: manhattan_pearson
value: 66.57822125651369
- type: manhattan_spearman
value: 66.71830390508079
task:
type: STS
- dataset:
config: ar-ar
name: MTEB STS17 (ar-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 81.68055610903552
- type: cosine_spearman
value: 82.18125783448961
- type: euclidean_pearson
value: 80.5422740473486
- type: euclidean_spearman
value: 81.79456727036232
- type: main_score
value: 82.18125783448961
- type: manhattan_pearson
value: 80.43564733654793
- type: manhattan_spearman
value: 81.76103816207625
task:
type: STS
- dataset:
config: ar
name: MTEB STS22 (ar)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 51.33460593849487
- type: cosine_spearman
value: 58.07741072443786
- type: euclidean_pearson
value: 54.26430308336828
- type: euclidean_spearman
value: 58.8384539429318
- type: main_score
value: 58.07741072443786
- type: manhattan_pearson
value: 54.41587176266624
- type: manhattan_spearman
value: 58.831993325957086
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 61.11956207522431
- type: cosine_spearman
value: 61.16768766134144
- type: euclidean_pearson
value: 64.44141934993837
- type: euclidean_spearman
value: 63.450379593077066
- type: main_score
value: 61.16768766134144
- type: manhattan_pearson
value: 64.43852352892529
- type: manhattan_spearman
value: 63.57630045107761
task:
type: STS
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 29.583566160417668
- type: cosine_spearman
value: 29.534419950502212
- type: dot_pearson
value: 28.13970643170574
- type: dot_spearman
value: 28.907762267009073
- type: main_score
value: 29.534419950502212
- type: pearson
value: 29.583566160417668
- type: spearman
value: 29.534419950502212
task:
type: Summarization
- name: SentenceTransformer based on UBC-NLP/MARBERTv2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.611168498883907
name: Pearson Cosine
- type: spearman_cosine
value: 0.6116733587939157
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6443687886661206
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6358107360369792
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.644404066642609
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6345893921062774
name: Spearman Euclidean
- type: pearson_dot
value: 0.4723643245352202
name: Pearson Dot
- type: spearman_dot
value: 0.44844519905410135
name: Spearman Dot
- type: pearson_max
value: 0.644404066642609
name: Pearson Max
- type: spearman_max
value: 0.6358107360369792
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.6664570291720014
name: Pearson Cosine
- type: spearman_cosine
value: 0.6647687532159875
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6429976947418544
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6334753432753939
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6466249455585532
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6373181315122213
name: Spearman Euclidean
- type: pearson_dot
value: 0.5370129457359227
name: Pearson Dot
- type: spearman_dot
value: 0.5241649973373772
name: Spearman Dot
- type: pearson_max
value: 0.6664570291720014
name: Pearson Max
- type: spearman_max
value: 0.6647687532159875
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.6601248277308522
name: Pearson Cosine
- type: spearman_cosine
value: 0.6592739654246011
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6361644543165994
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6250621947417249
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6408426652431157
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6300109524350457
name: Spearman Euclidean
- type: pearson_dot
value: 0.5250513197384045
name: Pearson Dot
- type: spearman_dot
value: 0.5154779060125071
name: Spearman Dot
- type: pearson_max
value: 0.6601248277308522
name: Pearson Max
- type: spearman_max
value: 0.6592739654246011
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.6549481034721005
name: Pearson Cosine
- type: spearman_cosine
value: 0.6523201621940143
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6342700090917214
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6226791710099966
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6397224689512541
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6280973341704362
name: Spearman Euclidean
- type: pearson_dot
value: 0.47240889358810917
name: Pearson Dot
- type: spearman_dot
value: 0.4633669926372942
name: Spearman Dot
- type: pearson_max
value: 0.6549481034721005
name: Pearson Max
- type: spearman_max
value: 0.6523201621940143
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.6367217585211098
name: Pearson Cosine
- type: spearman_cosine
value: 0.6370191671711296
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6263730801254332
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6118927366012856
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6327699647617465
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6180184829867724
name: Spearman Euclidean
- type: pearson_dot
value: 0.41169381399943167
name: Pearson Dot
- type: spearman_dot
value: 0.40444222536491986
name: Spearman Dot
- type: pearson_max
value: 0.6367217585211098
name: Pearson Max
- type: spearman_max
value: 0.6370191671711296
name: Spearman Max
---
# SentenceTransformer based on UBC-NLP/MARBERTv2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) <!-- at revision fe88db9db8ccdb0c4e1627495f405c44a5f89066 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Marbert-all-nli-triplet")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
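Because the model was trained with MatryoshkaLoss over dimensions 768/512/256/128/64 (see Training Details below), the embeddings can also be truncated to a lower dimensionality for cheaper storage and search. A rough sketch, continuing from the example above:
```python
# Sketch: truncate the 768-d embeddings to a smaller Matryoshka dimension.
import numpy as np

dim = 256  # any of the trained dims: 768, 512, 256, 128, 64
embeddings = model.encode(sentences)  # (3, 768), as in the example above
truncated = embeddings[:, :dim]  # keep only the first `dim` components
# Re-normalize so cosine similarity remains meaningful after truncation.
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (3, 256)
# Recent sentence-transformers releases can also do this directly via
# SentenceTransformer(model_name, truncate_dim=256); check your installed version.
```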
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6112 |
| **spearman_cosine** | **0.6117** |
| pearson_manhattan | 0.6444 |
| spearman_manhattan | 0.6358 |
| pearson_euclidean | 0.6444 |
| spearman_euclidean | 0.6346 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4484 |
| pearson_max | 0.6444 |
| spearman_max | 0.6358 |
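These figures come from sentence-transformers' `EmbeddingSimilarityEvaluator`; a minimal sketch of running the same kind of evaluation yourself, with `model` loaded as in the Usage section and illustrative sentence pairs and gold scores:
```python
# Sketch: evaluate semantic similarity on labelled pairs; the sentences below are
# the widget examples from this card and the gold scores are placeholders.
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

sentences1 = ["رجل يقدم عرضاً", "السرير قذر."]
sentences2 = ["هناك رجل بالخارج قرب الشاطئ", "رجل يقفز إلى سريره القذر"]
gold_scores = [0.2, 0.9]  # placeholder labels in [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="demo")
print(evaluator(model))  # Pearson/Spearman correlations for several distance metrics
```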
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6665 |
| **spearman_cosine** | **0.6648** |
| pearson_manhattan | 0.643 |
| spearman_manhattan | 0.6335 |
| pearson_euclidean | 0.6466 |
| spearman_euclidean | 0.6373 |
| pearson_dot | 0.537 |
| spearman_dot | 0.5242 |
| pearson_max | 0.6665 |
| spearman_max | 0.6648 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6601 |
| **spearman_cosine** | **0.6593** |
| pearson_manhattan | 0.6362 |
| spearman_manhattan | 0.6251 |
| pearson_euclidean | 0.6408 |
| spearman_euclidean | 0.63 |
| pearson_dot | 0.5251 |
| spearman_dot | 0.5155 |
| pearson_max | 0.6601 |
| spearman_max | 0.6593 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6549 |
| **spearman_cosine** | **0.6523** |
| pearson_manhattan | 0.6343 |
| spearman_manhattan | 0.6227 |
| pearson_euclidean | 0.6397 |
| spearman_euclidean | 0.6281 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4634 |
| pearson_max | 0.6549 |
| spearman_max | 0.6523 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.6367 |
| **spearman_cosine** | **0.637** |
| pearson_manhattan | 0.6264 |
| spearman_manhattan | 0.6119 |
| pearson_euclidean | 0.6328 |
| spearman_euclidean | 0.618 |
| pearson_dot | 0.4117 |
| spearman_dot | 0.4044 |
| pearson_max | 0.6367 |
| spearman_max | 0.637 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 7.68 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.66 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.47 tokens</li><li>max: 40 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
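In sentence-transformers code, a loss with these parameters can be constructed roughly as follows (a sketch, not the original training script):
```python
# Sketch: rebuild the loss configuration shown above.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("UBC-NLP/MARBERTv2")
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```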
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.78 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.41 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.95 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.0229 | 200 | 25.0771 | - | - | - | - | - |
| 0.0459 | 400 | 9.1435 | - | - | - | - | - |
| 0.0688 | 600 | 8.0492 | - | - | - | - | - |
| 0.0918 | 800 | 7.1378 | - | - | - | - | - |
| 0.1147 | 1000 | 7.6249 | - | - | - | - | - |
| 0.1377 | 1200 | 7.3604 | - | - | - | - | - |
| 0.1606 | 1400 | 6.5783 | - | - | - | - | - |
| 0.1835 | 1600 | 6.4145 | - | - | - | - | - |
| 0.2065 | 1800 | 6.1781 | - | - | - | - | - |
| 0.2294 | 2000 | 6.2375 | - | - | - | - | - |
| 0.2524 | 2200 | 6.2587 | - | - | - | - | - |
| 0.2753 | 2400 | 6.0826 | - | - | - | - | - |
| 0.2983 | 2600 | 6.1514 | - | - | - | - | - |
| 0.3212 | 2800 | 5.6949 | - | - | - | - | - |
| 0.3442 | 3000 | 6.0062 | - | - | - | - | - |
| 0.3671 | 3200 | 5.7551 | - | - | - | - | - |
| 0.3900 | 3400 | 5.658 | - | - | - | - | - |
| 0.4130 | 3600 | 5.7135 | - | - | - | - | - |
| 0.4359 | 3800 | 5.3909 | - | - | - | - | - |
| 0.4589 | 4000 | 5.5068 | - | - | - | - | - |
| 0.4818 | 4200 | 5.2261 | - | - | - | - | - |
| 0.5048 | 4400 | 5.1674 | - | - | - | - | - |
| 0.5277 | 4600 | 5.0427 | - | - | - | - | - |
| 0.5506 | 4800 | 5.3824 | - | - | - | - | - |
| 0.5736 | 5000 | 5.3063 | - | - | - | - | - |
| 0.5965 | 5200 | 5.2174 | - | - | - | - | - |
| 0.6195 | 5400 | 5.2116 | - | - | - | - | - |
| 0.6424 | 5600 | 5.2226 | - | - | - | - | - |
| 0.6654 | 5800 | 5.2051 | - | - | - | - | - |
| 0.6883 | 6000 | 5.204 | - | - | - | - | - |
| 0.7113 | 6200 | 5.154 | - | - | - | - | - |
| 0.7342 | 6400 | 5.0236 | - | - | - | - | - |
| 0.7571 | 6600 | 4.9476 | - | - | - | - | - |
| 0.7801 | 6800 | 4.0164 | - | - | - | - | - |
| 0.8030 | 7000 | 3.5707 | - | - | - | - | - |
| 0.8260 | 7200 | 3.3586 | - | - | - | - | - |
| 0.8489 | 7400 | 3.2376 | - | - | - | - | - |
| 0.8719 | 7600 | 3.0282 | - | - | - | - | - |
| 0.8948 | 7800 | 2.901 | - | - | - | - | - |
| 0.9177 | 8000 | 2.9371 | - | - | - | - | - |
| 0.9407 | 8200 | 2.8362 | - | - | - | - | - |
| 0.9636 | 8400 | 2.8121 | - | - | - | - | - |
| 0.9866 | 8600 | 2.7105 | - | - | - | - | - |
| 1.0 | 8717 | - | 0.6523 | 0.6593 | 0.6648 | 0.6370 | 0.6117 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
crimsonjoo/Llama3-Ko-LON-8B | crimsonjoo | 2024-06-25T10:51:38Z | 649 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-25T10:48:00Z | ---
base_model:
- beomi/Llama-3-Open-Ko-8B
- meta-llama/Meta-Llama-3-8B
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: meta-llama/Meta-Llama-3-8B
# no parameters necessary for base model
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
density: 0.5
weight: 0.5
- model: beomi/Llama-3-Open-Ko-8B
parameters:
density: 0.7
weight: 0.5
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
```
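A merge like this is typically produced by pointing mergekit's `mergekit-yaml` command at the configuration above; the resulting checkpoint in this repo should then load with the standard transformers API (a sketch):
```python
# Sketch: load the merged checkpoint from this repo with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "crimsonjoo/Llama3-Ko-LON-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",  # keep the bfloat16 weights produced by the merge
    device_map="auto",   # requires accelerate
)
```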
|
vamossyd/emtract-distilbert-base-uncased-emotion | vamossyd | 2024-02-26T19:15:22Z | 648 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"financial-emotion-analysis",
"emotion",
"twitter",
"stocktwits",
"en",
"dataset:vamossyd/finance_emotions",
"arxiv:2112.03868",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-17T22:58:34Z | ---
language:
- en
tags:
- text-classification
- distilbert
- financial-emotion-analysis
- emotion
- twitter
- stocktwits
- pytorch
license: mit
datasets:
- vamossyd/finance_emotions
metrics:
- accuracy
- precision
- recall
- f1
widget:
- text: "to the moon 🚀🚀🚀"
---
# EmTract (DistilBERT-Base-Uncased)
## Model Description
`emtract-distilbert-base-uncased-emotion` is a specialized model finetuned on a combination of [unify-emotion-datasets](https://github.com/sarnthil/unify-emotion-datasets), containing around 250K texts labeled across seven emotion categories: neutral, happy, sad, anger, disgust, surprise, and fear. This model was later adapted to a smaller set of 10K hand-tagged messages from StockTwits. The model is designed to excel at emotion detection in financial social media content such as that found on StockTwits.
Model parameters were as follows: sequence length of 64, learning rate of 2e-5, batch size of 128, trained for 8 epochs. For steps on how to use the model for inference, please refer to the accompanying Inference.ipynb notebook.
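Beyond the notebook, a quick way to try the model is the transformers `pipeline` API (a sketch; the example text is the widget prompt above):
```python
# Sketch: score a StockTwits-style message across the seven emotion labels.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vamossyd/emtract-distilbert-base-uncased-emotion",
    top_k=None,  # return scores for all seven emotions
)
print(classifier("to the moon 🚀🚀🚀"))
```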
## Training Data
The first part of the training data was obtained from the Unify Emotion Datasets available [here](https://github.com/sarnthil/unify-emotion-datasets). The second part was collected from social media and hand-tagged; it is available [here](https://huggingface.co/datasets/vamossyd/finance_emotions).
## Evaluation Metrics
The model was evaluated using the following metrics:
- Accuracy
- Precision
- Recall
- F1-score
## Research
The underlying research for emotion extraction from financial social media can be found on: [arxiv](https://arxiv.org/abs/2112.03868) and [SSRN](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3975884).
### Citation
Please cite the following if you use this model:
Vamossy, Domonkos F., and Rolf Skog. "EmTract: Extracting Emotions from Social Media." Available at SSRN 3975884 (2023).
BibTex citation:
```
@article{vamossy2023emtract,
title={EmTract: Extracting Emotions from Social Media},
author={Vamossy, Domonkos F and Skog, Rolf},
journal={Available at SSRN 3975884},
year={2023}
}
```
### Research using EmTract
[Social Media Emotions and IPO Returns](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4384573)
[Investor Emotions and Earnings Announcements](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3626025)
## License
This project is licensed under the terms of the MIT license. |
timm/vit_relpos_medium_patch16_224.sw_in1k | timm | 2023-05-05T22:04:22Z | 648 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-23T00:20:58Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for vit_relpos_medium_patch16_224.sw_in1k
A Vision Transformer (ViT) image classification model. This is a `timm` specific variation of the ViT architecture with relative position embeddings, no class token, and final representation via global average pool of tokens. Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on Swin Transformer train / pretrain recipe with modifications (related to both DeiT and ConvNeXt recipes)
* AdamW optimizer, gradient clipping, EMA weight averaging
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 38.7
- GMACs: 7.5
- Activations (M): 12.1
- Image size: 224 x 224
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_relpos_medium_patch16_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_relpos_medium_patch16_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 196, 512) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
|
timm/maxvit_tiny_rw_224.sw_in1k | timm | 2023-05-11T00:22:54Z | 648 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2204.01697",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-01-20T21:35:14Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for maxvit_tiny_rw_224.sw_in1k
A timm specific MaxViT image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm` specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` exactly match Tensorflow based models from the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
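If you want to explore these variants programmatically, `timm` can enumerate them; the sketch below simply lists model names matching the family patterns (the exact results depend on your installed `timm` version):
```python
import timm
# List MaxxViT-family model names known to this timm install.
# Patterns are globs; pretrained=True keeps only names with released weights.
for pattern in ('maxvit*', 'maxxvit*', 'coatnet*', 'coatnext*'):
    print(pattern, timm.list_models(pattern, pretrained=True))
```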
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 29.1
- GMACs: 5.1
- Activations (M): 33.1
- Image size: 224 x 224
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('maxvit_tiny_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_tiny_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'maxvit_tiny_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
TheBloke/speechless-codellama-34b-v2.0-GGUF | TheBloke | 2023-10-13T11:00:40Z | 648 | 9 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"code",
"text-generation",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2308.12950",
"base_model:uukuguy/speechless-codellama-34b-v2.0",
"license:llama2",
"model-index",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-13T10:21:24Z | ---
base_model: uukuguy/speechless-codellama-34b-v2.0
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
inference: false
language:
- en
library_name: transformers
license: llama2
model-index:
- name: SpeechlessCoder
results:
- dataset:
name: HumanEval
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 75.61
verified: false
task:
type: text-generation
model_creator: Jiangwen Su
model_name: Speechless Codellama 34B v2.0
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- llama-2
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Speechless Codellama 34B v2.0 - GGUF
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Speechless Codellama 34B v2.0](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jiangwen Su's Speechless Codellama 34B v2.0](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [speechless-codellama-34b-v2.0.Q2_K.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [speechless-codellama-34b-v2.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [speechless-codellama-34b-v2.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [speechless-codellama-34b-v2.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
| [speechless-codellama-34b-v2.0.Q4_0.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [speechless-codellama-34b-v2.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [speechless-codellama-34b-v2.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
| [speechless-codellama-34b-v2.0.Q5_0.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [speechless-codellama-34b-v2.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
| [speechless-codellama-34b-v2.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
| [speechless-codellama-34b-v2.0.Q6_K.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
| [speechless-codellama-34b-v2.0.Q8_0.gguf](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF/blob/main/speechless-codellama-34b-v2.0.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/speechless-codellama-34b-v2.0-GGUF and below it, a specific filename to download, such as: speechless-codellama-34b-v2.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/speechless-codellama-34b-v2.0-GGUF speechless-codellama-34b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/speechless-codellama-34b-v2.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/speechless-codellama-34b-v2.0-GGUF speechless-codellama-34b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
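If you prefer to download from Python rather than the CLI, the same `huggingface-hub` package exposes `hf_hub_download`; a minimal sketch (the local directory is just an example):
```python
from huggingface_hub import hf_hub_download
# Fetch a single quant file from this repo into the current directory.
model_path = hf_hub_download(
    repo_id="TheBloke/speechless-codellama-34b-v2.0-GGUF",
    filename="speechless-codellama-34b-v2.0.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(model_path)
```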
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m speechless-codellama-34b-v2.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/speechless-codellama-34b-v2.0-GGUF", model_file="speechless-codellama-34b-v2.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
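For completeness, the equivalent with `llama-cpp-python` looks roughly like this sketch; parameter values such as `n_gpu_layers` are illustrative, and the file path assumes the Q4_K_M quant downloaded above:
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = Llama(
    model_path="./speechless-codellama-34b-v2.0.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,
)
output = llm("def fibonacci(n):", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```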
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
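As a rough illustration of the first guide, wrapping this GGUF file in LangChain's `LlamaCpp` class can look like the following sketch (import paths and defaults vary between LangChain releases, so check the guide for your version):
```python
from langchain.llms import LlamaCpp
# Point LangChain's llama.cpp wrapper at the downloaded GGUF file.
llm = LlamaCpp(
    model_path="./speechless-codellama-34b-v2.0.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,   # set to 0 for CPU-only
    temperature=0.7,
)
print(llm("Write a Python function that reverses a string."))
```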
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jiangwen Su's Speechless Codellama 34B v2.0
<p><h1> speechless-codellama-34b-v2.0 </h1></p>
Use the following datasets to fine-tune codellama/CodeLlama-34B in order to improve the model's inference and planning capabilities.
Total 153,013 samples.
- jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 23,462 samples.
- Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 74,440 samples.
- garage-bAInd/Open-Platypus: 100%, 24,926 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: Coding conversation part. 30,185 samples
## HumanEval
| human-eval | pass@1 |
| --- | --- |
| humaneval-python | 75.61 |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
| Models | pass@1 |
|------ | ------ |
| Phind-CodeLlama-34B-v2| 71.95|
| WizardCoder-Python-34B-V1.0| 70.73|
| Phind-CodeLlama-34B-Python-v1| 70.22|
| Phind-CodeLlama-34B-v1| 65.85|
| WizardCoder-Python-13B-V1.0| 62.19|
| WizardCoder-15B-V1.0| 58.12|
| CodeLlama-34B-Python| 53.29|
| CodeLlama-34B-Instruct| 50.79|
| CodeLlama-13B-Instruct| 50.6|
| CodeLlama-34B| 45.11|
| CodeLlama-13B-Python| 42.89|
| CodeLlama-13B| 35.07|
## NL2SQL
SQL-EVAL: 125/175 (71.43%)
Average rate of exact match: 67.43%
Average correct rate: 71.43%
- GPT4: 130/175 (74.29%)
- GPT3-Turbo-0613: 105/175 (60.00%)
## lm-evaluation-harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 54.35 |
| HellaSwag | 75.65 |
| MMLU | 54.67 |
| TruthfulQA | 45.21 |
| Average | 57.47 |
H800-80G x 2
transformers=4.33.0
flash-attn=2.1.0
bitsandbytes=0.41.1
peft=0.5.0
## Training Arguments
| | |
|------ | ------ |
| lr | 2e-4 |
| lr_scheduler_type | cosine |
| weight_decay | 0.0 |
| optim | paged_adamw_8bit |
| flash_attention | True |
| rerope | False |
| max_new_tokens | 8192 |
| num_train_epochs | 3 |
| bits | 4 |
| lora_r | 64 |
| lora_alpha | 16 |
| lora_dropout | 0.05 |
| double_quant | True |
| quant_type | nf4 |
| dataset_format | airoboros |
| mini_batch_size | 4 |
| gradient_accumulation_steps | 16 |
| bf16 | True |
| | |
|------ | ------ |
| epoch | 3.0 |
| etrain_loss | 0.4261 |
| etrain_runtime | 1 day, 14:42:57.87 |
| etrain_samples_per_second | 3.227 |
| etrain_steps_per_second | 0.025 |
| eeval_loss | 0.4537 |
| eeval_runtime | 0:00:36.19 |
| eeval_samples_per_second | 5.525 |
| eeval_steps_per_second | 2.763 |
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
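Since infilling is listed as a capability, here is a sketch of fill-in-the-middle usage with the `<FILL_ME>` sentinel, based on the Code Llama integration in recent `transformers` versions (the function being completed is just an example):
```python
from transformers import LlamaForCausalLM, CodeLlamaTokenizer
import torch
model_id = "codellama/CodeLlama-13b-hf"
tokenizer = CodeLlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, i.e. the infilled span.
filling = tokenizer.batch_decode(generated[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```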
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
<!-- original-model-card end -->
|
second-state/WizardCoder-Python-7B-v1.0-GGUF | second-state | 2024-03-20T07:18:02Z | 648 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"code",
"base_model:WizardLM/WizardCoder-Python-7b-V1.0",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-17T08:06:03Z | ---
license: llama2
library_name: transformers
tags:
- code
metrics:
- code_eval
base_model: WizardLM/WizardCoder-Python-7b-V1.0
inference: false
model_creator: WizardLM
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WizardCoder-Python-7B-v1.0-GGUF
## Original Model
[WizardLM/WizardCoder-Python-7b-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
- Prompt template
- Prompt type: `wizard-coder`
- Prompt string
```text
Below is an instruction that describes a task. Write a response that appropriately completes the request.
\### Instruction:
{instruction}
\### Response:
```
**Note that the \ character is used to escape the ### in the prompt string. Remove it in practical use.**
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-api-server.wasm -p wizard-coder
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-chat.wasm -p wizard-coder -s 'Below is an instruction that describes a task. Write a response that appropriately completes the request.'
```
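If you need to assemble the `wizard-coder` prompt yourself (for example when calling the LlamaEdge API server directly), a small helper like this sketch reproduces the template shown above; the blank-line layout follows the prompt string in this card:
```python
def wizard_coder_prompt(instruction: str) -> str:
    # Reproduces the wizard-coder template described in this card.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:"
    )
print(wizard_coder_prompt("Write a Python function that checks whether a number is prime."))
```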
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [WizardCoder-Python-7B-V1.0-Q2_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q2_K.gguf) | Q2_K | 2 | 2.53 GB| smallest, significant quality loss - not recommended for most purposes |
| [WizardCoder-Python-7B-V1.0-Q3_K_L.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| small, substantial quality loss |
| [WizardCoder-Python-7B-V1.0-Q3_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| very small, high quality loss |
| [WizardCoder-Python-7B-V1.0-Q3_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss |
| [WizardCoder-Python-7B-V1.0-Q4_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [WizardCoder-Python-7B-V1.0-Q4_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended |
| [WizardCoder-Python-7B-V1.0-Q4_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss |
| [WizardCoder-Python-7B-V1.0-Q5_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [WizardCoder-Python-7B-V1.0-Q5_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended |
| [WizardCoder-Python-7B-V1.0-Q5_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended |
| [WizardCoder-Python-7B-V1.0-Q6_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss |
| [WizardCoder-Python-7B-V1.0-Q8_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended |
|
mPLUG/DocOwl1.5-Chat | mPLUG | 2024-04-10T07:32:16Z | 648 | 26 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mplug_docowl",
"OCR-free Document Understanding",
"Detailed Explanation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-28T01:44:51Z | ---
license: apache-2.0
language:
- en
tags:
- OCR-free Document Understanding
- Detailed Explanation
---
## Model Usage
Refer to https://github.com/X-PLUG/mPLUG-DocOwl/tree/main/DocOwl1.5 |
RichardErkhov/athirdpath_-_Hestia-20b-gguf | RichardErkhov | 2024-06-02T16:43:17Z | 648 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T06:47:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hestia-20b - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Hestia-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hestia-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q2_K.gguf) | Q2_K | 6.91GB |
| [Hestia-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Hestia-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Hestia-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Hestia-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Hestia-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q3_K.gguf) | Q3_K | 9.04GB |
| [Hestia-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Hestia-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Hestia-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Hestia-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Hestia-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Hestia-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Hestia-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q4_K.gguf) | Q4_K | 11.22GB |
| [Hestia-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Hestia-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Hestia-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Hestia-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Hestia-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q5_K.gguf) | Q5_K | 13.18GB |
| [Hestia-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Hestia-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Hestia-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q6_K.gguf) | Q6_K | 15.28GB |
| [Hestia-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Hestia-20b-gguf/blob/main/Hestia-20b.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
---
This is a task_arithmetic merge of Harmonia (my 20b faux base model) with Noromaid and my LORA-glued Nethena. Solidly outperforms Harmonia.
merge_method: task_arithmetic
base_model: athirdpath/Harmonia-20b
models:
  - model: athirdpath/Harmonia-20b
  - model: NeverSleep/Noromaid-20b-v0.1.1
    parameters:
      weight: 0.25
  - model: athirdpath/Nethena-20b-Glued
    parameters:
      weight: 0.2
dtype: float16
Thanks to Undi95 for pioneering the 20B recipe, and for most of the models involved.
|
CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF | CHE-72 | 2024-06-22T17:12:18Z | 648 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
] | null | 2024-06-22T17:11:48Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF --hf-file taide-lx-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF --hf-file taide-lx-7b-chat-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF --hf-file taide-lx-7b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q8_0-GGUF --hf-file taide-lx-7b-chat-q8_0.gguf -c 2048
```
|
deepmind/optical-flow-perceiver | deepmind | 2021-12-11T13:28:43Z | 647 | 10 | transformers | [
"transformers",
"pytorch",
"perceiver",
"dataset:autoflow",
"arxiv:2107.14795",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
datasets:
- autoflow
---
# Perceiver IO for optical flow
Perceiver IO model trained on [AutoFlow](https://autoflow-google.github.io/). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Optical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even aiding the transfer of more complex, learned inference, such as 3D human pose estimation, from synthetic to real images.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents into outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel.
The model obtains state-of-the-art results on important optical flow benchmarks, including [Sintel](http://sintel.is.tue.mpg.de/) and [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow).
## Intended uses & limitations
You can use the raw model for predicting optical flow between a pair of images. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you.
### How to use
We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Optical_Flow.ipynb) regarding using the Perceiver for optical flow.
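As a quick orientation (the notebook remains the authoritative reference), the checkpoint can be loaded through the `transformers` Perceiver classes. The input shape below is an assumption based on the training resolution and patch extraction described in this card; the notebook shows the full preprocessing:

```python
import torch
from transformers import PerceiverForOpticalFlow

model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")
model.eval()

# Each pixel is represented by a 3x3 patch from both frames
# (3 channels x 3 x 3 x 2 frames = 54 values per pixel).
# Assumed input shape for the 368x496 training resolution:
dummy_inputs = torch.randn(1, 2, 27, 368, 496)

with torch.no_grad():
    outputs = model(inputs=dummy_inputs)
print(outputs.logits.shape)  # predicted flow, (batch_size, height, width, 2)
```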
## Training data
This model was trained on [AutoFlow](https://autoflow-google.github.io/), a synthetic dataset consisting of 400,000 annotated image pairs.
## Training procedure
### Preprocessing
Frames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel).
### Pretraining
Hyperparameter details can be found in Appendix E of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
The model achieves an average end-point error (EPE) of 1.81 on Sintel.clean, 2.42 on Sintel.final and 4.98 on KITTI. For evaluation results, we refer to table 4 of the [paper](https://arxiv.org/abs/2107.14795).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
timm/dla46_c.in1k | timm | 2023-04-24T21:12:33Z | 647 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.06484",
"license:bsd-3-clause",
"region:us"
] | image-classification | 2023-04-24T19:34:17Z | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for dla46_c.in1k
A DLA (Deep Layer Aggregation) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 1.3
- GMACs: 0.6
- Activations (M): 4.5
- Image size: 224 x 224
- **Papers:**
- Deep Layer Aggregation: https://arxiv.org/abs/1707.06484
- **Original:** https://github.com/ucbdrive/dla
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dla46_c.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla46_c.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 64, 28, 28])
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla46_c.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 256, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2018deep,
title={Deep layer aggregation},
author={Yu, Fisher and Wang, Dequan and Shelhamer, Evan and Darrell, Trevor},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
year={2018}
}
```
|
digiplay/LemonCreami | digiplay | 2024-05-09T13:39:50Z | 647 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-31T06:59:45Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
A very beautiful text-to-image model; the pupils look great! 😍
Model info:
https://civitai.com/models/41384/lemoncreami
V1 version.
This is a very beautiful text-to-image model that balances painterly detail with anime-style imagery.
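A minimal diffusers sketch for trying the model; the prompt and inference settings here are illustrative, not the settings used for the sample below:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/LemonCreami", torch_dtype=torch.float16
).to("cuda")

# A custom VAE (as recommended below for better colors) can be swapped in via pipe.vae
image = pipe("pretty cute girl, detailed eyes, anime style", num_inference_steps=25).images[0]
image.save("lemoncreami_sample.png")
```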
Sample image I made (recommended: apply a *VAE* for better colors):
 |
lorahub/flan_t5_large-quoref_Guess_Title_For_Context | lorahub | 2023-07-24T10:01:19Z | 647 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-24T10:01:09Z | ---
library_name: peft
---
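The card itself is empty, so as a hedged note: judging by the repository name, this appears to be a LoRA adapter for FLAN-T5-large, and one plausible way to load it with `peft` (assuming that base model) is:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

# Attach the LoRA adapter hosted in this repository
model = PeftModel.from_pretrained(base, "lorahub/flan_t5_large-quoref_Guess_Title_For_Context")

inputs = tokenizer("Guess the title for this context: ...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```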
|
kwoncho/KoFinBERT | kwoncho | 2024-04-04T05:40:02Z | 647 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-10-01T08:05:49Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
2024.4.4 Update
This is a sentiment analysis model for classifying sentences in corporate news as positive, neutral, or negative.
It is intended as a Korean-language sentiment analysis model for the finance, management, and accounting domains.
Examples:
삼성전자의 부채가 증가하고 있습니다. ("Samsung Electronics' debt is increasing.") --> neutral. An increase in debt is not, by itself, negative.
경영전략의 실패로 삼성전자의 부채가 증가하고 있습니다. ("Samsung Electronics' debt is increasing due to a failed management strategy.") --> negative. Debt growth caused by a failure is negative.
현지원, 이준일, and 조현권. "KoBERT 를 이용한 기업관련 신문기사 감성 분류 연구" [A study on sentiment classification of corporate news articles using KoBERT]. 회계학연구 (Korean Accounting Review) 47.4 (2022): 33-54.
This model extends the one proposed in the paper above and is released through Hugging Face.
Please cite the paper above if you use this model in your research.
The model was fine-tuned from https://huggingface.co/jhgan/ko-sroberta-multitask.
See the link below for example usage code.
Google Colab:
https://colab.research.google.com/drive/1ORzKUr94cPyc5jaRCAngbclm4Qb4DtdG
The current evaluation results of the model are as follows:
{'eval_loss': 0.7330707907676697,
'eval_f1': 0.8689251403360293,
'eval_runtime': 0.464,
'eval_samples_per_second': 2047.32,
'eval_steps_per_second': 17.241,
'epoch': 33.33}
In terms of accuracy this is an improvement over the 85.7% reported in the paper, although the gain is not dramatic.
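A minimal usage sketch with the transformers pipeline; the label names returned are whatever the checkpoint defines, with the positive/neutral/negative mapping above as the intended interpretation:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="kwoncho/KoFinBERT")

# The two example sentences from this card
sentences = [
    "삼성전자의 부채가 증가하고 있습니다.",
    "경영전략의 실패로 삼성전자의 부채가 증가하고 있습니다.",
]
for s in sentences:
    print(s, classifier(s))
```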
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Buseak/vowelizer_1203_v11 | Buseak | 2024-03-17T11:33:57Z | 647 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"canine",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-17T10:12:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: vowelizer_1203_v11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vowelizer_1203_v11
This model is a fine-tuned version of [Buseak/vowelizer_1203_v9](https://huggingface.co/Buseak/vowelizer_1203_v9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 1.0000
- Recall: 1.0000
- F1: 1.0000
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
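The hyperparameters listed above correspond roughly to the following `TrainingArguments`; this is a sketch, since the exact Trainer setup used to produce the checkpoint is not included in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vowelizer_1203_v11",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer
)
```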
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0659 | 1.0 | 967 | 0.0290 | 0.9908 | 0.9845 | 0.9877 | 0.9920 |
| 0.0394 | 2.0 | 1934 | 0.0166 | 0.9950 | 0.9921 | 0.9936 | 0.9955 |
| 0.0271 | 3.0 | 2901 | 0.0098 | 0.9967 | 0.9958 | 0.9963 | 0.9974 |
| 0.0202 | 4.0 | 3868 | 0.0059 | 0.9981 | 0.9978 | 0.9979 | 0.9984 |
| 0.0152 | 5.0 | 4835 | 0.0037 | 0.9989 | 0.9982 | 0.9985 | 0.9991 |
| 0.0119 | 6.0 | 5802 | 0.0026 | 0.9992 | 0.9989 | 0.9990 | 0.9993 |
| 0.01 | 7.0 | 6769 | 0.0017 | 0.9995 | 0.9992 | 0.9994 | 0.9996 |
| 0.0077 | 8.0 | 7736 | 0.0013 | 0.9995 | 0.9995 | 0.9995 | 0.9997 |
| 0.0062 | 9.0 | 8703 | 0.0009 | 0.9996 | 0.9997 | 0.9997 | 0.9998 |
| 0.0062 | 10.0 | 9670 | 0.0006 | 0.9998 | 0.9998 | 0.9998 | 0.9999 |
| 0.0051 | 11.0 | 10637 | 0.0006 | 0.9998 | 0.9997 | 0.9998 | 0.9999 |
| 0.0043 | 12.0 | 11604 | 0.0004 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0036 | 13.0 | 12571 | 0.0003 | 0.9999 | 0.9999 | 0.9999 | 0.9999 |
| 0.0031 | 14.0 | 13538 | 0.0002 | 0.9999 | 0.9999 | 0.9999 | 1.0000 |
| 0.0027 | 15.0 | 14505 | 0.0002 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 0.0025 | 16.0 | 15472 | 0.0001 | 1.0000 | 0.9999 | 0.9999 | 1.0000 |
| 0.0021 | 17.0 | 16439 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 0.0019 | 18.0 | 17406 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 0.0017 | 19.0 | 18373 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 0.0016 | 20.0 | 19340 | 0.0001 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
unsloth/gemma-1.1-7b-it | unsloth | 2024-04-18T15:03:46Z | 647 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"gemma-7b",
"bnb",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T10:26:40Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- gemma
- gemma-7b
- bnb
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Gemma 7b here: https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
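Outside the notebooks above, a minimal load-for-finetuning sketch with Unsloth might look like this; the sequence length and 4-bit loading are illustrative choices:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-1.1-7b-it",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # fits a free-tier T4
)
# LoRA adapters are then attached with FastLanguageModel.get_peft_model(...)
# exactly as shown in the notebooks linked above.
```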
|
georgesung/llama3_8b_chat_uncensored | georgesung | 2024-04-30T13:53:42Z | 647 | 5 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gguf",
"llama",
"text-generation",
"dataset:georgesung/wizard_vicuna_70k_unfiltered",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-30T09:27:14Z | ---
license: other
datasets:
- georgesung/wizard_vicuna_70k_unfiltered
---
# Overview
Fine-tuned [Llama-3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with an uncensored/unfiltered Wizard-Vicuna conversation dataset.
Used QLoRA for fine-tuning.
The model here includes the fp32 HuggingFace version, plus a quantized 4-bit q4_0 [gguf version](https://huggingface.co/georgesung/llama3_8b_chat_uncensored/resolve/main/llama3_8b_chat_uncensored_q4_0.gguf?download=true).
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
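A small helper for assembling prompts in this format (a sketch; model loading and generation settings are up to the caller, and whitespace should follow the template shown above):

```python
def build_prompt(turns, user_message):
    """turns: list of (human, response) pairs from earlier in the conversation."""
    prompt = ""
    for human, response in turns:
        prompt += f"### HUMAN:\n{human}\n### RESPONSE:\n{response}\n"
    prompt += f"### HUMAN:\n{user_message}\n### RESPONSE:\n"
    return prompt

print(build_prompt([("Hello", "Hi, how are you?")], "I'm fine."))
```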
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).
To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama3_8b_chat_uncensored.yaml
```
# Fine-tuning guide
https://georgesung.github.io/ai/qlora-ift/
# Ollama inference
First, install [Ollama](https://ollama.com/). Based on instructions [here](https://github.com/ollama/ollama/blob/main/README.md#import-from-gguf), run the following:
```
cd $MODEL_DIR_OF_CHOICE
wget https://huggingface.co/georgesung/llama3_8b_chat_uncensored/resolve/main/llama3_8b_chat_uncensored_q4_0.gguf
```
Create a file called `llama3-uncensored.modelfile` with the following:
```
FROM ./llama3_8b_chat_uncensored_q4_0.gguf
TEMPLATE """{{ .System }}
### HUMAN:
{{ .Prompt }}
### RESPONSE:
"""
PARAMETER stop "### HUMAN:"
PARAMETER stop "### RESPONSE:"
```
Then run:
```
ollama create llama3-uncensored -f llama3-uncensored.modelfile
ollama run llama3-uncensored
```
|
mradermacher/SOLAR-10.7B-NahIdWin-GGUF | mradermacher | 2024-06-05T10:21:19Z | 647 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/SOLAR-10.7B-NahIdWin",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T22:22:07Z | ---
base_model: Sao10K/SOLAR-10.7B-NahIdWin
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/SOLAR-10.7B-NahIdWin
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-NahIdWin-GGUF/resolve/main/SOLAR-10.7B-NahIdWin.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF | mradermacher | 2024-06-29T05:40:23Z | 647 | 0 | transformers | [
"transformers",
"gguf",
"yi",
"moe",
"DPO",
"en",
"base_model:cloudyu/Yi-34Bx2-MoE-60B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T13:45:43Z | ---
base_model: cloudyu/Yi-34Bx2-MoE-60B-DPO
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
license_name: yi-license
quantized_by: mradermacher
tags:
- yi
- moe
- DPO
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B-DPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 12.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 14.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 16.3 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 18.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 18.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q2_K.gguf) | i1-Q2_K | 22.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 23.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 25.1 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 26.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 26.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 29.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 31.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 32.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q4_0.gguf) | i1-Q4_0 | 34.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 34.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 36.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 42.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 43.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-DPO-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B-DPO.i1-Q6_K.gguf) | i1-Q6_K | 50.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Helsinki-NLP/opus-mt-sv-fi | Helsinki-NLP | 2023-08-16T12:05:05Z | 646 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-fi
* source languages: sv
* target languages: fi
* OPUS readme: [sv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-fi/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-04-07.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.zip)
* test set translations: [opus+bt-2020-04-07.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.test.txt)
* test set scores: [opus+bt-2020-04-07.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.eval.txt)
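A minimal transformers sketch for running the converted checkpoint hosted in this repository; the example sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Swedish input, Finnish output
batch = tokenizer(["Jag älskar maskinöversättning."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```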
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| fiskmo_testset.sv.fi | 26.9 | 0.623 |
| Tatoeba.sv.fi | 45.2 | 0.678 |
|
mbien/recipenlg | mbien | 2023-08-30T20:18:45Z | 646 | 5 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation
Model accompanying our INLG 2020 paper: [RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://www.aclweb.org/anthology/2020.inlg-1.4.pdf)
## Where is the dataset?
Please visit the website of our project: [recipenlg.cs.put.poznan.pl](https://recipenlg.cs.put.poznan.pl/) to download it.
## How to use the model? Could you explain X and Y?
Yes, sure! If you feel some information is missing from our paper, please check our [thesis](https://www.researchgate.net/publication/345308878_Cooking_recipes_generator_utilizing_a_deep_learning-based_language_model) first, which is much more detailed. In case of further questions, you're invited to open a GitHub issue; we will respond as fast as we can!
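Since the checkpoint is a GPT-2 model, a minimal generation sketch looks like the following; note that the model was trained with the structured control tokens described in the paper, so the plain-text prompt here is only illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="mbien/recipenlg")
print(generator("chicken, rice, garlic", max_new_tokens=100, do_sample=True)[0]["generated_text"])
```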
|
huggingtweets/interiordesign | huggingtweets | 2022-07-27T15:30:24Z | 646 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-07-27T15:21:57Z | ---
language: en
thumbnail: http://www.huggingtweets.com/interiordesign/1658935819881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544346507578589184/x9URB7Yy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Interior Design</div>
<div style="text-align: center; font-size: 14px;">@interiordesign</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Interior Design.
| Data | Interior Design |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 97 |
| Short tweets | 2 |
| Tweets kept | 3151 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vl5m9w7s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @interiordesign's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/interiordesign')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
timm/swin_tiny_patch4_window7_224.ms_in22k | timm | 2024-02-10T23:31:47Z | 646 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2103.14030",
"license:mit",
"region:us"
] | image-classification | 2023-03-18T04:15:11Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-22k
---
# Model card for swin_tiny_patch4_window7_224.ms_in22k
A Swin Transformer image classification model. Pretrained on ImageNet-22k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 44.3
- GMACs: 4.5
- Activations (M): 17.1
- Image size: 224 x 224
- **Papers:**
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swin_tiny_patch4_window7_224.ms_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_tiny_patch4_window7_224.ms_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swin_tiny_patch4_window7_224.ms_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021Swin,
title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Yntec/OpenGenDiffusers | Yntec | 2024-02-28T04:13:33Z | 646 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"protogen",
"darkstorm2150",
"Rexts",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-26T00:14:24Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- protogen
- darkstorm2150
- Rexts
inference: true
---
# OpenGen Diffusers
Diffusers version of OpenGen with the Color101VAE baked in.
Sample Image and Prompt:
<center><img src="https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/lo4Tw0iJ9AM-yDRx2oeca.png" style="height:640px; width:640px; border-radius: 7%; border: 5px solid #663380; padding-top:0px;" span title="OpenGen Raw Output"></center>
Pretty cute girl carrying Cinema 4d colorful render, organic, ultra detailed, of stars and rainbows, scratched, biomechanical costume, syringes, beaming shining light, analog, macro lens, beautiful natural soft rim light, neon, lights, smoke, winged insects and stems, roots, fine foliage lace, colorful details, rick owens, art nouveau fashion embroidered
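A diffusers sketch that could be used to try the start of the prompt above with the baked-in VAE; the sampler, step count, and guidance scale are illustrative, not the settings used for the sample image:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/OpenGenDiffusers", torch_dtype=torch.float16
).to("cuda")

prompt = ("Pretty cute girl carrying Cinema 4d colorful render, organic, ultra detailed, "
          "of stars and rainbows, scratched, biomechanical costume")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7).images[0]
image.save("opengen_sample.png")
```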
Original Pages:
https://huggingface.co/darkstorm2150/OpenGen/
https://civitai.com/models/70248/color101-vae
Recipe:
 |
TheBloke/Pygmalion-2-13B-SuperCOT-GGUF | TheBloke | 2023-09-27T12:48:38Z | 646 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"en",
"base_model:royallab/Pygmalion-2-13b-SuperCOT",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-10T13:10:10Z | ---
language:
- en
license: llama2
library_name: transformers
tags:
- llama
- llama-2
model_name: Pygmalion 2 13B SuperCOT
base_model: royallab/Pygmalion-2-13b-SuperCOT
inference: false
model_creator: The Royal Lab
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Pygmalion 2 13B SuperCOT - GGUF
- Model creator: [The Royal Lab](https://huggingface.co/royallab)
- Original model: [Pygmalion 2 13B SuperCOT](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT)
<!-- description start -->
## Description
This repo contains GGUF format model files for [The Royal Lab's Pygmalion 2 13B SuperCOT](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF)
* [The Royal Lab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [pygmalion-2-13b-supercot.Q2_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [pygmalion-2-13b-supercot.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [pygmalion-2-13b-supercot.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [pygmalion-2-13b-supercot.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [pygmalion-2-13b-supercot.Q4_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [pygmalion-2-13b-supercot.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [pygmalion-2-13b-supercot.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [pygmalion-2-13b-supercot.Q5_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [pygmalion-2-13b-supercot.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [pygmalion-2-13b-supercot.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [pygmalion-2-13b-supercot.Q6_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [pygmalion-2-13b-supercot.Q8_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF/blob/main/pygmalion-2-13b-supercot.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Pygmalion-2-13B-SuperCOT-GGUF and below it, a specific filename to download, such as: pygmalion-2-13b-supercot.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-GGUF pygmalion-2-13b-supercot.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pygmalion-2-13B-SuperCOT-GGUF pygmalion-2-13b-supercot.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m pygmalion-2-13b-supercot.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pygmalion-2-13B-SuperCOT-GGUF", model_file="pygmalion-2-13b-supercot.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain, plus a minimal sketch after the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
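As a concrete starting point, here is a minimal sketch of the llama-cpp-python route with LangChain. It is not taken from the guides above; the import path assumes a recent `langchain-community` release, and the file path and sampling settings are placeholders to adjust for your setup.

```python
# Minimal sketch, assuming langchain-community and llama-cpp-python are installed.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./pygmalion-2-13b-supercot.Q4_K_M.gguf",  # download this file first
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
    repeat_penalty=1.1,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting from a llama.\n\n### Response:"
)
print(llm.invoke(prompt))
```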
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: The Royal Lab's Pygmalion 2 13B SuperCOT
# Model Card: Pygmalion-2-13b-SuperCOT
This is a merge between:
- [Pygmalion 2 13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Ausboss's Llama2 SuperCOT loras](https://huggingface.co/ausboss/llama2-13b-supercot-loras) at a weight of 1.00.
Quantizations provided by us and TheBloke:
- [GGUF](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT-GGUF)
- [GGUF (TheBloke)](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-GPTQ)
The merge was performed by a commandline version of [EzTrainer](https://github.com/CoffeeVampir3/ez-trainer) by CoffeeVampire/Blackroot via [zaraki-tools](https://github.com/CoffeeVampir3/ez-trainer) by Zaraki.
The intended objective is to make Pygmalion-2 smarter and less prone to drifting off-topic.
The SuperCOT lora was merged at a weight of 1.
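For readers who want to reproduce a comparable merge without EzTrainer/zaraki-tools, the sketch below shows a generic LoRA merge using the `peft` library. This is an illustrative substitute, not the authors' actual pipeline; the output directory is an assumption, and the LoRA repo may require selecting a subfolder depending on how its adapters are laid out.

```python
# Illustrative LoRA merge sketch using peft; not the EzTrainer/zaraki-tools pipeline the authors used.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-2-13b", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("PygmalionAI/pygmalion-2-13b")

# Apply the SuperCOT LoRA (effectively weight 1.0) and bake it into the base weights.
model = PeftModel.from_pretrained(base, "ausboss/llama2-13b-supercot-loras")
model = model.merge_and_unload()

model.save_pretrained("./pygmalion-2-13b-supercot-merged")      # output path is an assumption
tokenizer.save_pretrained("./pygmalion-2-13b-supercot-merged")
```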
## Usage:
Since this is a merge between Pygmalion-2 and SuperCOT, the following instruction formats should work:
Metharme:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
Alpaca:
```
### Instruction:
Your instruction or question here.
### Response:
```
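If it helps, here is a small illustrative helper (not part of the original card) for assembling the two formats programmatically; the example texts are placeholders.

```python
# Illustrative prompt builders for the two supported formats; example texts are placeholders.
def metharme_prompt(system: str, user: str) -> str:
    return f"<|system|>{system}<|user|>{user}<|model|>"

def alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(metharme_prompt("This is a text adventure game. Describe the scenario to the user.", "Start!"))
print(alpaca_prompt("Summarise the rules of chess in one paragraph."))
```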
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended to supply factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
<!-- original-model-card end -->
|
mesolitica/mallam-3B-4096 | mesolitica | 2023-12-28T01:08:53Z | 646 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ms",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-27T06:19:25Z | ---
language:
- ms
---
# MaLLaM 🌙 3B (Malaysia Large Language Model), pretrained with a 4096 context length on Malaysian text
Pretrained from scratch with 3B parameters using the Mistral architecture on 90B tokens of Malaysian text.
README at https://github.com/mesolitica/malaya/tree/5.1/pretrained-model/mistral
- Trained on 90B tokens, gathered at https://github.com/malaysia-ai/dedup-text-dataset/tree/main/pretrain-llm
- We used a Ray cluster to train on 5 nodes of 4x A100 80GB GPUs, https://github.com/malaysia-ai/jupyter-gpu/tree/main/ray
WandB, https://wandb.ai/mesolitica/pretrain-mistral-3b?workspace=user-husein-mesolitica
WandB report, https://wandb.ai/mesolitica/pretrain-mistral-3b/reports/Pretrain-Larger-Malaysian-Mistral--Vmlldzo2MDkyOTgz
Technical report, https://github.com/mesolitica/malaya/wiki/MaLLaM-%F0%9F%8C%99-Malaysia-Large-Language-Model
## how-to
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)
tokenizer = AutoTokenizer.from_pretrained('mesolitica/mallam-3B-4096')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/mallam-3B-4096',
use_flash_attention_2 = True,
quantization_config = nf4_config
)
prompt = '<s>nama saya'
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
generate_kwargs = dict(
inputs,
max_new_tokens=512,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,
repetition_penalty=1.05,
)
r = model.generate(**generate_kwargs)
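# Not in the original card: decode the generated ids back into text (assumed final step)
print(tokenizer.decode(r[0], skip_special_tokens=True))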
``` |
TheBloke/notus-7B-v1-GGUF | TheBloke | 2023-12-04T15:36:53Z | 646 | 23 | transformers | [
"transformers",
"gguf",
"mistral",
"dpo",
"rlaif",
"preference",
"ultrafeedback",
"text-generation",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences",
"base_model:argilla/notus-7b-v1",
"license:mit",
"model-index",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-03T18:47:50Z | ---
base_model: argilla/notus-7b-v1
datasets:
- argilla/ultrafeedback-binarized-preferences
inference: false
language:
- en
library_name: transformers
license: mit
model-index:
- name: notus-7b-v1
results:
- dataset:
args:
num_few_shot: 25
config: ARC-Challenge
name: AI2 Reasoning Challenge (25-Shot)
split: test
type: ai2_arc
metrics:
- name: normalized accuracy
type: acc_norm
value: 0.6459044368600683
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 10
name: HellaSwag (10-Shot)
split: validation
type: hellaswag
metrics:
- name: normalized accuracy
type: acc_norm
value: 0.8478390758812986
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 3
name: Drop (3-Shot)
split: validation
type: drop
metrics:
- name: f1 score
type: f1
value: 0.08913590604026835
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 0
config: multiple_choice
name: TruthfulQA (0-shot)
split: validation
type: truthful_qa
metrics:
- type: mc2
value: 0.5436768358952805
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 5
config: all
name: MMLU (5-Shot)
split: test
type: cais/mmlu
metrics:
- name: accuracy
type: acc
value: 0.6303308230938872
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 5
config: main
name: GSM8k (5-shot)
split: test
type: gsm8k
metrics:
- name: accuracy
type: acc
value: 0.1516300227445034
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
args:
num_few_shot: 5
config: winogrande_xl
name: Winogrande (5-shot)
split: validation
type: winogrande
metrics:
- name: accuracy
type: acc
value: 0.7940015785319653
source:
name: Open LLM Leaderboard Results
url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
task:
name: Text Generation
type: text-generation
- dataset:
name: AlpacaEval
type: tatsu-lab/alpaca_eval
metrics:
- name: win rate
type: tatsu-lab/alpaca_eval
value: 0.9142
source:
url: https://tatsu-lab.github.io/alpaca_eval/
task:
name: Text Generation
type: text-generation
- dataset:
name: MT-Bench
type: unknown
metrics:
- name: score
type: unknown
value: 7.3
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
task:
name: Text Generation
type: text-generation
model_creator: Argilla
model_name: Notus 7B v1
model_type: mistral
pipeline_tag: text-generation
prompt_template: '<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
tags:
- dpo
- rlaif
- preference
- ultrafeedback
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Notus 7B v1 - GGUF
- Model creator: [Argilla](https://huggingface.co/argilla)
- Original model: [Notus 7B v1](https://huggingface.co/argilla/notus-7b-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Argilla's Notus 7B v1](https://huggingface.co/argilla/notus-7b-v1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/notus-7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/notus-7B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/notus-7B-v1-GGUF)
* [Argilla's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/argilla/notus-7b-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [notus-7b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [notus-7b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [notus-7b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [notus-7b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [notus-7b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [notus-7b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [notus-7b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [notus-7b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [notus-7b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [notus-7b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [notus-7b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [notus-7b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/notus-7B-v1-GGUF/blob/main/notus-7b-v1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/notus-7B-v1-GGUF and below it, a specific filename to download, such as: notus-7b-v1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/notus-7B-v1-GGUF notus-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/notus-7B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/notus-7B-v1-GGUF notus-7b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m notus-7b-v1.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./notus-7b-v1.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./notus-7b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, plus a minimal sketch after the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
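Below is a minimal sketch of the llama-cpp-python route with LangChain, wired to the Zephyr-style prompt template shown earlier. It is not taken from those guides; the import paths assume recent `langchain-core`/`langchain-community` releases, and the file path and settings are placeholders.

```python
# Minimal sketch combining this GGUF file, LangChain and the Zephyr-style prompt template.
from langchain_community.llms import LlamaCpp
from langchain_core.prompts import PromptTemplate

llm = LlamaCpp(
    model_path="./notus-7b-v1.Q4_K_M.gguf",  # download this file first
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,       # raise towards 32768 if you have the memory for it
    temperature=0.7,
)

prompt = PromptTemplate.from_template(
    "<|system|>\n</s>\n<|user|>\n{question}</s>\n<|assistant|>\n"
)
chain = prompt | llm
print(chain.invoke({"question": "Explain DPO in two sentences."}))
```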
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Argilla's Notus 7B v1
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/dj-spsk9eXMMXVGxK6jRz.png" alt="A banner representing Notus, the wind god of the south, in a mythical and artistic style. The banner features a strong, swirling breeze, embodying the warm, wet character of the southern wind. Gracefully flowing across the scene are several paper planes, caught in the gentle yet powerful gusts of Notus. The background is a blend of warm colors, symbolizing the heat of the south, with hints of blue and green to represent the moisture carried by this wind. The overall atmosphere is one of dynamic movement and warmth."/>
</div>
# Model Card for Notus 7B v1
Notus is a collection of fine-tuned models using Direct Preference Optimization (DPO) and related RLHF techniques. This model is the first version, fine-tuned with DPO over `zephyr-7b-sft-full`, which is the SFT model produced to create `zephyr-7b-beta`.
Following a **data-first** approach, the only difference between Notus-7B-v1 and Zephyr-7B-beta is the preference dataset used for dDPO.
In particular, when we started building [distilabel](https://github.com/argilla-io/distilabel), we invested time understanding and deep-diving into the UltraFeedback dataset. Using [Argilla](https://argilla.io/), we found data issues in the original UltraFeedback dataset that led to high scores for bad responses (more details in the training data section). After curating several hundred data points, we decided to binarize the dataset using the preference ratings, instead of the original critique `overall_score`, and verified the new dataset with Argilla.
Using preference ratings, instead of critique scores, led to a new dataset where the chosen response is different in ~50% of the cases. Using this new dataset with DPO we fine-tuned Notus, a 7B model that **surpasses Zephyr-7B-beta and Claude 2 on AlpacaEval**.
> **Important note**: While we opted for the average of multi-aspect ratings as we fix the original dataset, a very interesting open question remains: once the critique data is fixed, what works better, the critique scores or the preference ratings? We're very excited to do this comparison in the coming weeks, stay tuned!
This model **wouldn't have been possible without the amazing [Alignment Handbook](https://github.com/huggingface/alignment-handbook) and [OpenBMB](https://www.openbmb.cn/home) releasing the UltraFeedback dataset**, and it's based on fruitful discussions with the HuggingFace H4 team. In particular, we used `zephyr-7b-beta`'s recipe, which worked out of the box and enabled us to focus on what we do best: **high-quality data**.
Notus models are intended to be used as assistants via chat-like applications, and are evaluated with Chat (MT-Bench, AlpacaEval) and Academic (Open LLM Leaderboard) benchmarks for a direct comparison with the original Zephyr dDPO model and other 7B models.
> **Why Notus?**: Notus name comes from the ancient Greek god Notus, as a wink to Zephyr, which comes from the ancient Greek god Zephyrus; with the difference that Notus is the god of the south wind, and Zephyr the god of the west wind. More information at https://en.wikipedia.org/wiki/Anemoi.
## Model Details
### Model Description
- **Developed by:** Argilla (based on HuggingFace H4 and MistralAI previous efforts and amazing work)
- **Shared by:** Argilla
- **Model type:** GPT-like 7B model DPO fine-tuned
- **Language(s) (NLP):** Mainly English
- **License:** MIT (same as Zephyr 7B-beta)
- **Finetuned from model:** [`alignment-handbook/zephyr-7b-sft-full`](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full)
### Model Sources
- **Repository:** https://github.com/argilla-io/notus
- **Paper:** N/A
- **Demo:** https://argilla-notus-chat-ui.hf.space/
## Performance
### Chat benchmarks
Table adapted from Zephyr-7b-β's and Starling's original tables for the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks. Results are sorted by AlpacaEval win rate and omit some >7B models for brevity.
Notus stays on par with Zephyr on MT-Bench, while surpassing Zephyr, Claude 2, and Cohere Command on AlpacaEval, making Notus the most competitive 7B model with a commercially permissive license on AlpacaEval.
<table>
<tr>
<th>Model</th>
<th>Size</th>
<th>Alignment</th>
<th>MT-Bench (score)</th>
<th>AlpacaEval (win rate %)</th>
<th>License</th>
</tr>
<tr>
<td>GPT-4-turbo</td>
<td>-</td>
<td>?</td>
<td>9.32</td>
<td>97.70</td>
<td>Proprietary</td>
</tr>
<tr>
<td>XwinLM 70b V0.1</td>
<td>70B</td>
<td>dPPO</td>
<td>-</td>
<td>95.57</td>
<td>LLaMA 2 License</td>
</tr>
<tr>
<td>GPT-4</td>
<td>-</td>
<td>RLHF</td>
<td>8.99</td>
<td>95.03</td>
<td>Proprietary</td>
</tr>
<tr>
<td>Tulu 2+DPO 70B V0.1</td>
<td>70B</td>
<td>dDPO</td>
<td>6.29</td>
<td>95.28</td>
<td>Proprietary</td>
</tr>
<tr>
<td>LLaMA2 Chat 70B</td>
<td>70B</td>
<td>RLHF</td>
<td>6.86</td>
<td>92.66</td>
<td>LLaMA 2 License</td>
</tr>
<tr>
<td>Starling-7B</td>
<td>7B</td>
<td>C-RLFT + APA</td>
<td><strong>8.09</strong></td>
<td><strong>91.99</strong></td>
<td>CC-BY-NC-4.0</td>
</tr>
<tr style="background-color: #FFFF99;">
<td><strong>Notus-7b-v1</strong></td>
<td>7B</td>
<td>dDPO</td>
<td>7.30</td>
<td>91.42</td>
<td>MIT</td>
</tr>
<tr>
<td>Claude 2</td>
<td>-</td>
<td>RLHF</td>
<td>8.06</td>
<td>91.36</td>
<td>Proprietary</td>
</tr>
<tr>
<td>Zephyr-7b-β</td>
<td>7B</td>
<td>dDPO</td>
<td>7.34</td>
<td>90.60</td>
<td>MIT</td>
</tr>
<tr>
<td>Cohere Command</td>
<td>-</td>
<td>RLHF</td>
<td>-</td>
<td>90.62</td>
<td>Proprietary</td>
</tr>
<tr>
<td>GPT-3.5-turbo</td>
<td>-</td>
<td>RLHF</td>
<td>7.94</td>
<td>89.37</td>
<td>Proprietary</td>
</tr>
</table>
## Academic benchmarks
Results from [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
|-----------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|-------|
| Zephyr 7B dDPO (HuggingFaceH4/zephyr-7b-beta) | 52.15 | 62.03 | 84.36 | 61.07 | **57.45** | 77.74 | 12.74 | **9.66** |
| argilla/notus-7b-v1 | **52.89** | **64.59** | **84.78** | **63.03** | 54.37 | **79.4** | **15.16** | 8.91 |
⚠️ As pointed out by [AllenAI researchers](https://twitter.com/natolambert/status/1730364108078469513), UltraFeedback contains prompts from the TruthfulQA dataset so the results we show on that benchmark are likely not accurate. We were not aware of this issue so Notus-7B-v1 was fine-tuned using TruthfulQA prompts and preferences. For future releases, we will remove TruthfulQA prompts.
## Training Details
### Training Hardware
We used a VM with 8 x A100 40GB GPUs hosted on Lambda Labs, but while experimenting we also explored other cloud providers such as GCP.
### Training Data
We used a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [UltraFeedback binarized preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).
TL;DR
After visually browsing through some examples using Argilla's sort and filter features (sorting by highest rating for chosen responses), we noticed a strong mismatch between the `overall_score` in the original UF dataset (and the Zephyr train_prefs dataset) and the quality of the chosen response.
By adding the critique rationale to our Argilla Dataset, **we confirmed the critique rationale was highly negative, whereas the rating was very high** (for most cases it was the highest: `10`).
See screenshot below for one example of this issue.
After some quick investigation, we:
* identified hundreds of examples having the same issue,
* reported a bug on the [UltraFeedback repo](https://github.com/OpenBMB/UltraFeedback/issues/8),
* and informed the H4 team which was incredibly responsive and ran an additional experiment to validate the new rating binarization approach.
While we work on fixing the original dataset (we have already narrowed down ~2K problematic examples), we decided to leverage the multi-preference ratings, leading to Notus!

> **Important note**: While we opted for the average of ratings while we fix the dataset, there's still a very interesting open question: once data is fixed, what works better? using the critique scores or the preference ratings? We're very excited to do this comparison in the coming weeks, stay tuned!
You can find more details about the dataset analysis and curation on the [ultrafeedback-binarized-preferences dataset card](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).
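To make the rating-based binarization described above concrete, here is an illustrative sketch of the idea. It is not Argilla's actual curation code, and the field names follow the raw `openbmb/UltraFeedback` layout only approximately, so treat them as assumptions.

```python
# Illustrative sketch of rating-based binarization; not Argilla's actual pipeline.
import random

def binarize(example):
    """Pick the highest average-rated completion as 'chosen' and a lower-rated one as 'rejected'."""
    def avg_rating(completion):
        # Aspect names and the "Rating" key are assumptions about the raw UltraFeedback layout.
        ratings = [float(aspect["Rating"]) for aspect in completion["annotations"].values()]
        return sum(ratings) / len(ratings)

    ranked = sorted(example["completions"], key=avg_rating, reverse=True)
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0]["response"],
        "rejected": random.choice(ranked[1:])["response"],
    }
```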
## Prompt template
We use the same prompt template as [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Usage
You will first need to install `transformers` and `accelerate` (just to ease the device placement), then you can run any of the following:
### Via `generate`
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1")
messages = [
{
"role": "system",
"content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
},
{"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_special_tokens=False, add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
### Via `pipeline` method
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{
"role": "system",
"content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
},
{"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
generated_text = outputs[0]["generated_text"]
```
<!-- original-model-card end -->
|
CC-AI-Labs/sharks-uncased-bert-featurizer | CC-AI-Labs | 2024-01-23T08:50:24Z | 646 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2023-12-07T14:10:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# CC-AI-Labs/sharks-uncased-bert-featurizer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('CC-AI-Labs/sharks-uncased-bert-featurizer')
embeddings = model.encode(sentences)
print(embeddings)
```
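As a small follow-up (not part of the original card), the embeddings can be compared directly for semantic search or clustering:

```python
# Compare the two example embeddings with cosine similarity.
from sentence_transformers import util

score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```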
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('CC-AI-Labs/sharks-uncased-bert-featurizer')
model = AutoModel.from_pretrained('CC-AI-Labs/sharks-uncased-bert-featurizer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=CC-AI-Labs/sharks-uncased-bert-featurizer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 108 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 40,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 429.20000000000005,
"weight_decay": 0.01
}
```
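The parameters above can be stitched back into a runnable training script roughly as follows. This is a hedged reconstruction: the actual training data and base checkpoint are not published here, so the examples, labels and starting model are placeholders.

```python
# Hedged reconstruction of the training setup from the parameters listed above.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("CC-AI-Labs/sharks-uncased-bert-featurizer")  # placeholder starting point

# BatchHardTripletLoss expects single sentences with integer class labels; these are dummy examples.
train_examples = [
    InputExample(texts=["great white shark seen from the side"], label=0),
    InputExample(texts=["hammerhead shark swimming over a reef"], label=1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.BatchHardTripletLoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=40,
    warmup_steps=430,               # ~429.2 in the logged run
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```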
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
stablediffusionapi/ponydiffusionv6xl | stablediffusionapi | 2024-06-20T16:53:30Z | 646 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-20T16:49:03Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ponydiffusionv6xl API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "ponydiffusionv6xl".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/ponydiffusionv6xl)
Model link: [View model](https://modelslab.com/models/ponydiffusionv6xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "ponydiffusionv6xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
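As a follow-up sketch (not part of the original card), the response can be parsed for the generated image URLs. The `status` and `output` fields are assumptions based on the ModelsLab documentation; check the API reference for the exact schema.

```python
# Parse the JSON response; field names are assumptions based on the ModelsLab docs.
result = response.json()
if result.get("status") == "success":
    print("Image URLs:", result.get("output", []))
else:
    print("Generation failed or still processing:", result)
```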
> Use this coupon code to get 25% off **DMGG0RBN** |
CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF | CHE-72 | 2024-06-22T17:42:26Z | 646 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
] | null | 2024-06-22T17:42:11Z | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
姓名(Name): text
生日(Date of birth): date_picker
國家(Country): country
所屬單位(Affiliation): text
geo: ip_location
按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF --hf-file taide-lx-7b-chat-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF --hf-file taide-lx-7b-chat-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF --hf-file taide-lx-7b-chat-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_L-GGUF --hf-file taide-lx-7b-chat-q3_k_l.gguf -c 2048
```
|
scutcyr/SoulChat | scutcyr | 2023-06-08T16:26:19Z | 645 | 24 | transformers | [
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"zh",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2023-06-03T22:02:14Z | ---
license: apache-2.0
inference:
parameters:
max_length: 250
temperature: 0.95
top_p: 0.75
widget:
- text: 用户:临近考试,我感觉我很多内容没有学进去,好慌\n心理咨询师:
- text: 用户:你好\n心理咨询师:你好!我是你的个人专属数字辅导员甜心老师,欢迎找我倾诉、谈心,期待帮助到你!\n用户:临近考试,我感觉我很多内容没有学进去,好慌\n心理咨询师:
- text: 用户:考试压力特别大怎么办\n心理咨询师:
language:
- zh
---
# SoulChat: Improving LLMs' "Empathy" Ability through Mixed Fine-tuning with Long-text Counseling Instructions and Multi-turn Empathetic Dialogue Datasets
<a href='https://huggingface.co/scutcyr/SoulChat' target="__blank">SoulChat</a> |
<a href='https://github.com/scutcyr/BianQue' target="__blank">BianQue</a> |
Based on the six key features of proactive health (proactivity, prevention, precision, personalization, co-construction and sharing, and self-discipline), the School of Future Technology at South China University of Technology and the Guangdong Provincial Key Laboratory of Digital Twin Humans have open-sourced ProactiveHealthGPT, a suite of base large language models for proactive health in living spaces in the Chinese domain, including:
* [BianQue, a living-space health LLM](https://github.com/scutcyr/BianQue) instruction-tuned on tens of millions of Chinese health dialogue samples
* [SoulChat, a mental health LLM](https://github.com/scutcyr/SoulChat) jointly instruction-tuned on million-scale Chinese long-text counseling instructions and multi-turn empathetic dialogue data
We hope that **ProactiveHealthGPT** can help the academic community accelerate the research and application of large language models in proactive health areas such as chronic disease and psychological counseling. This project is **SoulChat, the mental health large language model**.
## Recent Updates
- 👏🏻 2023.06.06: The BianQue-2.0 model is open-sourced; see [BianQue-2.0](https://huggingface.co/scutcyr/BianQue-2) for details.
- 👏🏻 2023.06.06: SoulChat, a mental health LLM with empathy and listening abilities, is released; see [SoulChat: Improving LLMs' empathy through mixed fine-tuning with long-text counseling instructions and multi-turn empathetic dialogue datasets](https://huggingface.co/scutcyr/SoulChat) for details.
- 👏🏻 2023.04.22: A medical question-answering demo based on BianQue-1.0 is available at [https://huggingface.co/spaces/scutcyr/BianQue](https://huggingface.co/spaces/scutcyr/BianQue).
- 👏🏻 2023.04.22: The BianQue-1.0 model is released; see [BianQue-1.0: Improving the "Question" Ability of Medical Chat Model through finetuning with Hybrid Instructions and Multi-turn Doctor QA Datasets](https://huggingface.co/scutcyr/BianQue-1.0) for details.
## Introduction
We surveyed common online psychological counseling platforms and found that, when users seek help online, they usually have to write a lengthy self-description, and the counselor offering help replies with an equally lengthy answer (see [https://github.com/scutcyr/SoulChat/blob/main/figure/single_turn.png](https://github.com/scutcyr/SoulChat/figure/single_turn.png)), so the gradual process of opening up is missing. In real psychological counseling, however, the user and the counselor communicate over multiple turns, during which the counselor guides the user to talk and offers empathy, for example: "That's great", "I understand how you feel", "Of course you can", and so on.
Given the current shortage of multi-turn empathetic dialogue datasets, we first built **SoulChatCorpus-single_turn**, a corpus of more than 150,000 single-turn long-text counseling instructions with more than 500,000 answers (6.7 times as many instructions as [PsyQA](https://github.com/thu-coai/PsyQA), a common counseling dataset), and then used ChatGPT and GPT-4 to generate roughly 1,000,000 turns of **multi-turn response data (SoulChatCorpus-multi_turn)**. In preliminary experiments we found that a counseling model driven purely by single-turn long texts produces replies so long that users get tired of them and cannot guide users to open up, while a model driven purely by multi-turn counseling dialogues loses much of its ability to give advice. We therefore mixed SoulChatCorpus-single_turn and SoulChatCorpus-multi_turn into **SoulChatCorpus**, a mixed single-turn and multi-turn empathetic dialogue dataset with more than 1.2 million samples. All data are unified into a single instruction format of the form "用户:xxx\n心理咨询师:xxx\n用户:xxx\n心理咨询师:".
We chose [ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b) as the initialization model and performed **full-parameter instruction fine-tuning**, aiming to improve the model's empathy, its ability to guide users to talk, and its ability to give reasonable suggestions. For more training details, please watch for our forthcoming paper.
## Usage
* Clone this repository
```bash
cd ~
git clone https://github.com/scutcyr/SoulChat.git
```
* Install dependencies
Note that the torch version must be chosen according to the CUDA version actually installed on your server; see the [PyTorch installation guide](https://pytorch.org/get-started/previous-versions/) for details.
```bash
cd SoulChat
conda env create -n proactivehealthgpt_py38 --file proactivehealthgpt_py38.yml
conda activate proactivehealthgpt_py38
pip install cpm_kernels
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
```
* Call the SoulChat model from Python:
```python
import torch
from transformers import AutoModel, AutoTokenizer
# GPU setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the model and tokenizer
model_name_or_path = 'scutcyr/SoulChat'
model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True).half()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
# Single-turn dialogue: call the model's chat function
user_input = "我失恋了,好难受!"
input_text = "用户:" + user_input + "\n心理咨询师:"
response, history = model.chat(tokenizer, query=input_text, history=None, max_length=2048, num_beams=1, do_sample=True, top_p=0.75, temperature=0.95, logits_processor=None)
# Multi-turn dialogue: call the model's chat function
# Note: this project uses "\n用户:" and "\n心理咨询师:" to separate the turns of the dialogue history
# Note: user_history is one element longer than bot_history
user_history = ['你好,老师', '我女朋友跟我分手了,感觉好难受']
bot_history = ['你好!我是你的个人专属数字辅导员甜心老师,欢迎找我倾诉、谈心,期待帮助到你!']
# Concatenate the dialogue history
context = "\n".join([f"用户:{user_history[i]}\n心理咨询师:{bot_history[i]}" for i in range(len(bot_history))])
input_text = context + "\n用户:" + user_history[-1] + "\n心理咨询师:"
response, history = model.chat(tokenizer, query=input_text, history=None, max_length=2048, num_beams=1, do_sample=True, top_p=0.75, temperature=0.95, logits_processor=None)
```
* Launch the service
This project provides [soulchat_app.py](https://github.com/scutcyr/SoulChat/blob/main/soulchat_app.py) as a usage example for the SoulChat model. Start the service with the following command, then access it at http://<your_ip>:9026.
```bash
streamlit run soulchat_app.py --server.port 9026
```
In particular, in [soulchat_app.py](https://github.com/scutcyr/SoulChat/blob/main/soulchat_app.py),
you can modify the following code to select a different GPU:
```python
os.environ['CUDA_VISIBLE_DEVICES'] = '2'
```
and you can change the following code to point the model path at a local path:
```python
model_name_or_path = 'scutcyr/SoulChat'
```
## Disclaimer
* This project uses the weights of the ChatGLM-6B model and must comply with its [MODEL_LICENSE](https://github.com/THUDM/ChatGLM-6B/blob/main/MODEL_LICENSE); therefore, **this project may only be used for your non-commercial research purposes**.
* The SoulChat model provided by this project aims to improve the empathetic dialogue and listening abilities of large models. However, the model's output has a degree of randomness: it is acceptable for a listener, but it is not recommended to use SoulChat's output as a substitute for the diagnosis or advice of a psychologist or other professional. This project does not guarantee that the model's output is fully suitable for the user, and users must bear all risks arising from using this model!
* You may not use, copy, modify, merge, publish, distribute, reproduce, or create derivative works of all or part of the SoulChat model for any commercial, military, or illegal purpose.
* You may not use the SoulChat model to engage in any activity that endangers national security or national unity, harms the public interest, or infringes on personal rights.
* When using the SoulChat model, you should be aware that it cannot replace professionals such as doctors and psychologists, that you should not over-rely on, obey, or trust the model's output, and that you should not become addicted to chatting with the SoulChat model for long periods.
## Acknowledgements
This project was initiated by the Guangdong Provincial Key Laboratory of Digital Twin Humans at the [School of Future Technology, South China University of Technology](https://www2.scut.edu.cn/ft/main.htm), with support from the Information Network Engineering Research Center of South China University of Technology. We also thank our partner institutions, including Guangdong Women and Children Hospital, Guangzhou Women and Children's Medical Center, and the Third Affiliated Hospital of Sun Yat-sen University.
## Citation
```bib
@misc{chen2023soulchat,
title={灵心健康大模型SoulChat:通过长文本咨询指令与多轮共情对话数据集的混合微调,提升大模型的“共情”能力},
author={Yirong Chen, Xiaofen Xing, Zhenyu Wang, Xiangmin Xu},
year={2023},
month = {6},
version = {1.0},
url = {https://github.com/scutcyr/SoulChat}
}
```
|
Erlalex/dominikof-v1-5-1 | Erlalex | 2023-07-16T19:02:27Z | 645 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-16T18:57:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DominikOF_v1.5.1 Dreambooth model trained by Erlalex with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
mmnga/line-corp-japanese-large-lm-1.7b-gguf | mmnga | 2024-03-24T05:54:30Z | 645 | 0 | null | [
"gguf",
"ja",
"license:apache-2.0",
"region:us"
] | null | 2023-09-03T22:35:34Z | ---
license: apache-2.0
language:
- ja
---
# line-corporation/japanese-large-lm-1.7b
This is a GGUF conversion of [japanese-large-lm-1.7b, released by line-corporation](https://huggingface.co/line-corporation/japanese-large-lm-1.7b).
Other models are available here:
GPT-NEOX
[mmnga/line-corp-japanese-large-lm-3.6b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-gguf)
[mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf)
GPT-2
[mmnga/line-corp-japanese-large-lm-1.7b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-gguf)
[mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf)
### Conversion script
[line-gpt2_convert-hf-to-gguf.py](https://gist.github.com/mmnga/f278224f56eb66bfb29bb7ec42562cb1)
## Usage
```
git clone --branch mmnga-dev-merge https://github.com/mmnga/llama.cpp.git
cd llama.cpp
make -j
./main -m 'line-corp-japanese-large-lm-1.7b-q4_0.gguf' -n 128 -p '犬「吾輩は猫である。」猫「' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1
```
|
immich-app/XLM-Roberta-Large-Vit-B-32 | immich-app | 2023-10-28T23:01:25Z | 645 | 1 | transformers | [
"transformers",
"onnx",
"immich",
"clip",
"multilingual",
"endpoints_compatible",
"region:us"
] | null | 2023-10-28T22:42:49Z | ---
tags:
- immich
- clip
- multilingual
---
# Model Description
This repo contains ONNX exports for the multilingual CLIP model [M-CLIP/XLM-Roberta-Large-Vit-B-32](https://huggingface.co/M-CLIP/XLM-Roberta-Large-Vit-B-32).
It separates the visual and textual encoders into separate models for the purpose of generating image and text embeddings.
This repo is specifically intended for use with [Immich](https://immich.app/), a self-hosted photo library.
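The sketch below shows one way these exports might be used with `onnxruntime`; the file names, input names, and tensor shapes are assumptions for illustration, so check the actual files in this repo and pair them with the matching CLIP image transform and XLM-Roberta tokenizer.
```python
# Illustrative sketch; "visual/model.onnx" / "textual/model.onnx" are assumed file names.
import numpy as np
import onnxruntime as ort

visual = ort.InferenceSession("visual/model.onnx")
textual = ort.InferenceSession("textual/model.onnx")
print([i.name for i in visual.get_inputs()], [i.name for i in textual.get_inputs()])  # inspect expected inputs

# Image branch: a preprocessed image tensor (CLIP-style resize/normalise); random data here.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
image_emb = visual.run(None, {visual.get_inputs()[0].name: image})[0]

# Text branch: token ids (and attention mask, if the export requires one) from the XLM-Roberta tokenizer.
input_ids = np.array([[0, 35378, 2]], dtype=np.int64)  # illustrative token ids
feeds = {textual.get_inputs()[0].name: input_ids}
if len(textual.get_inputs()) > 1:
    feeds[textual.get_inputs()[1].name] = np.ones_like(input_ids)
text_emb = textual.run(None, feeds)[0]

# Cosine similarity between the image and text embeddings.
sim = (image_emb @ text_emb.T) / (np.linalg.norm(image_emb) * np.linalg.norm(text_emb))
print(sim)
```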
|
m-a-p/OpenCodeInterpreter-DS-33B | m-a-p | 2024-03-03T11:44:54Z | 645 | 116 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"arxiv:2402.14658",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T05:40:33Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
license: apache-2.0
---
<h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement</h1>
<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>
## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.
For further information and related work, refer to our paper: ["OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement"](https://arxiv.org/abs/2402.14658) available on arXiv.
## Model Information
This model is based on [deepseek-coder-33b-base](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base).
## Benchmark Scores
The OpenCodeInterpreter Models series exemplifies the evolution of coding model performance, particularly highlighting the significant enhancements brought about by the integration of execution feedback. In an effort to quantify these improvements, we present a detailed comparison across two critical benchmarks: HumanEval and MBPP. This comparison not only showcases the individual performance metrics on each benchmark but also provides an aggregated view of the overall performance enhancement. The subsequent table succinctly encapsulates the performance data, offering a clear perspective on how execution feedback contributes to elevating the models' capabilities in code interpretation and execution tasks.
| **Benchmark** | **HumanEval (+)** | **MBPP (+)** | **Average (+)** |
|---------------|-------------------|--------------|-----------------|
| **OpenCodeInterpreter-DS-1.3B** | 65.2 (61.0) | 63.4 (52.4) | 64.3 (56.7) |
| + Execution Feedback | 65.2 (62.2) | 65.2 (55.6) | 65.2 (58.9) |
| **OpenCodeInterpreter-DS-6.7B** | 76.2 (72.0) | 73.9 (63.7) | 75.1 (67.9) |
| + Execution Feedback | 81.1 (78.7) | 82.7 (72.4) | 81.9 (75.6) |
| + Synth. Human Feedback | 87.2 (86.6) | 86.2 (74.2) | 86.7 (80.4) |
| + Synth. Human Feedback (Oracle) | 89.7 (86.6) | 87.2 (75.2) | 88.5 (80.9) |
| **OpenCodeInterpreter-DS-33B** | 79.3 (74.3) | 78.7 (66.4) | 79.0 (70.4) |
| + Execution Feedback | 82.9 (80.5) | 83.5 (72.2) | 83.2 (76.4) |
| + Synth. Human Feedback | 88.4 (86.0) | 87.5 (75.9) | 88.0 (81.0) |
| + Synth. Human Feedback (Oracle) | 92.7 (89.7) | 90.5 (79.5) | 91.6 (84.6) |
| **OpenCodeInterpreter-CL-7B** | 72.6 (67.7) | 66.4 (55.4) | 69.5 (61.6) |
| + Execution Feedback | 75.6 (70.1) | 69.9 (60.7) | 72.8 (65.4) |
| **OpenCodeInterpreter-CL-13B** | 77.4 (73.8) | 70.7 (59.2) | 74.1 (66.5) |
| + Execution Feedback | 81.1 (76.8) | 78.2 (67.2) | 79.7 (72.0) |
| **OpenCodeInterpreter-CL-34B** | 78.0 (72.6) | 73.4 (61.4) | 75.7 (67.0) |
| + Execution Feedback | 81.7 (78.7) | 80.2 (67.9) | 81.0 (73.3) |
| **OpenCodeInterpreter-CL-70B** | 76.2 (70.7) | 73.0 (61.9) | 74.6 (66.3) |
| + Execution Feedback | 79.9 (77.4) | 81.5 (69.9) | 80.7 (73.7) |
| **OpenCodeInterpreter-GM-7B** | 56.1 (50.0) | 39.8 (34.6) | 48.0 (42.3) |
| + Execution Feedback | 64.0 (54.3) | 48.6 (40.9) | 56.3 (47.6) |
| **OpenCodeInterpreter-SC2-3B** | 65.2 (57.9) | 62.7 (52.9) | 64.0 (55.4) |
| + Execution Feedback | 67.1 (60.4) | 63.4 (54.9) | 65.3 (57.7) |
| **OpenCodeInterpreter-SC2-7B** | 73.8 (68.9) | 61.7 (51.1) | 67.8 (60.0) |
| + Execution Feedback | 75.6 (69.5) | 66.9 (55.4) | 71.3 (62.5) |
| **OpenCodeInterpreter-SC2-15B** | 75.6 (69.5) | 71.2 (61.2) | 73.4 (65.4) |
| + Execution Feedback | 77.4 (72.0) | 74.2 (63.4) | 75.8 (67.7) |
*Note: The "(+)" notation represents scores from extended versions of the HumanEval and MBPP benchmarks. To ensure a fair comparison, the results shown for adding execution feedback are based on outcomes after just one iteration of feedback, without unrestricted iterations. This approach highlights the immediate impact of execution feedback on performance improvements across benchmarks.*
## Model Usage
### Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path="m-a-p/OpenCodeInterpreter-DS-33B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model.eval()
prompt = "Write a function to find the shared elements from the given two lists."
inputs = tokenizer.apply_chat_template(
[{'role': 'user', 'content': prompt }],
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=1024,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Contact
If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected].
We're here to assist you! |
umarigan/Trendyol-LLM-7b-chat-v0.1-DPO | umarigan | 2024-03-06T11:41:33Z | 645 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T08:56:48Z | ---
library_name: transformers
language:
- tr
pipeline_tag: text-generation
license: apache-2.0
---
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Umar Igan
- **Model type:** LLama-2-7B-chat
- **Language(s) (NLP):** Turkish
- **Finetuned from model:** Trendyol-LLM-7b-chat-v0.1
## How to Get Started with the Model
```
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="umarigan/Trendyol-LLM-7b-chat-v0.1-DPO")
# Generate text
sequences = pipe(
"büyük dil modellerinin finans alanındaki kullanımları nelerdir",
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
Question: büyük dil modellerinin finans alanındaki kullanımları nelerdir?
Answer: Çok büyük dil modelleri, özellikle de Transformer gibi, karmaşık dil görevlerinin üstesinden gelmek için tasarlanmışlardır. Bu, finansal piyasalardaki veri işleme, fiyat tahmini ve analizleri, finansal haberler ve raporlama gibi süreçleri içerir. Ayrıca, büyük dil modelleri, doğal dil işleme, metin sınıflandırma ve soru cevaplama gibi görevlerin yanı sıra, müşteri hizmetleri gibi insan etkileşimi gerektiren finansal hizmetlerde de kullanılmaktadır.
```
## Training Details
### Training Data
This model was trained on a Falcon instruction dataset translated into Turkish.
Dataset:
https://huggingface.co/datasets/umarigan/falcon_feedback_instraction_Turkish
#### Training Hyperparameters
```
Some training arguments are as follows:
max_prompt_length=1024,
max_length=1536,
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=200,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
fp16=True,
```
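For reference, the sketch below shows how arguments like these could be wired into `trl`'s `DPOTrainer` (trl ≈0.7 API; newer versions move these options into `DPOConfig`). It is not the author's actual training script, and it assumes the preference dataset exposes `prompt`/`chosen`/`rejected` columns.
```
# Illustrative sketch only, not the original training script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "Trendyol/Trendyol-LLM-7b-chat-v0.1"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumed to provide "prompt"/"chosen"/"rejected" columns for DPO.
train_dataset = load_dataset("umarigan/falcon_feedback_instraction_Turkish", split="train")

training_args = TrainingArguments(
    output_dir="Trendyol-LLM-7b-chat-v0.1-DPO",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    fp16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # a frozen reference copy is created automatically
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```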
wandb results:
https://api.wandb.ai/links/umar-i-gan/0hnrvrdq |
mesolitica/llama-3-8b-8192-hf | mesolitica | 2024-04-25T03:29:26Z | 645 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ms",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T03:32:59Z | ---
language:
- ms
---
# Full-Parameter Finetuning of Llama 3 8B with 8192 Context Length on 9.5B Tokens of Malaysian Text
README at https://github.com/huseinzol05/malaya/tree/master/session/llama3
Data preparation at https://github.com/malaysia-ai/dedup-text-dataset/tree/main/llama-3
WandB, https://wandb.ai/huseinzol05/finetune-llama-3-8b/workspace?nw=nwuserhuseinzol05
## how-to
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained('mesolitica/llama-3-8b-8192-hf')
model = AutoModelForCausalLM.from_pretrained('mesolitica/llama-3-8b-8192-hf', torch_dtype=torch.bfloat16,
use_flash_attention_2 = True)
_ = model.cuda()
t = tokenizer('User: KWSP tu apa?\nBot: ', return_tensors = 'pt').to('cuda')
r = model.generate(**t, max_new_tokens=200,
top_p=0.95,
top_k=50,
temperature=0.9,
do_sample=True,
num_beams=1,)
tokenizer.decode(r[0])
```
```
User: KWSP tu apa?
Bot: ialah kumpulan wang simpanan pekerja kita sebagai badan kebajikan yang memastikan kesejahteraan 13.5 juta ahlinya terus terbela. 2) Pengeluaran i-sinar akan diteruskan dengan bayaran pertama 3) Pengeluaran i-citra akan diteruskan dengan bayaran pertama bulan April 2021. 4) 600,000 orang telah memohon untuk pengeluaran i-lestari daripada jumlah yang layak iaitu 2.5 juta orang. 5) KWSP telah meningkatkan had pengeluaran i-lestari daripada 6,000 kepada 9,000 bagi 6) Peratusan pengeluaran i-lestari turut dinaikkan daripada 4% kepada 10% iaitu maksimum pengeluaran RM
``` |
screevoai/llama3-70b-instruct-4bit | screevoai | 2024-04-23T19:17:59Z | 645 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"meta",
"conversational",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-04-23T18:08:43Z | ---
license: other
base_model: meta-llama/Meta-Llama-3-70B-Instruct
model-index:
- name: Llama3-70b-Instruct-4bit
results:
- task:
name: Text Generation
type: text-generation
metrics:
- name: None
type: None
value: none
pipeline_tag: text-generation
tags:
- llama3
- meta
---
# Llama3-70b-Instruct-4bit
This model is a quantized version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
### Libraries to Install
- pip install transformers torch
### Authentication needed before running the script
Run the following command in the terminal/jupyter_notebook:
- Terminal: huggingface-cli login
- Jupyter_notebook:
```python
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
**NOTE:** Copy and Paste the token from your Huggingface Account Settings > Access Tokens > Create a new token / Copy the existing one.
### Script
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> import torch
>>> # Load model and tokenizer
>>> model_id = "screevoai/llama3-70b-instruct-4bit"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForCausalLM.from_pretrained(
>>> model_id,
>>> torch_dtype=torch.bfloat16,
>>> device_map="cuda:0"
>>> )
>>> # message
>>> messages = [
>>> {"role": "system", "content": "You are a personal assistant chatbot, so respond accordingly"},
>>> {"role": "user", "content": "What is Machine Learning?"},
>>> ]
>>> input_ids = tokenizer.apply_chat_template(
>>> messages,
>>> add_generation_prompt=True,
>>> return_tensors="pt"
>>> ).to(model.device)
>>> terminators = [
>>> tokenizer.eos_token_id,
>>> tokenizer.convert_tokens_to_ids("<|eot_id|>")
>>> ]
>>> # Generate predictions using the model
>>> outputs = model.generate(
>>> input_ids,
>>> max_new_tokens=512,
>>> eos_token_id=terminators,
>>> do_sample=True,
>>> temperature=0.6,
>>> top_p=0.9,
>>> )
>>> response = outputs[0][input_ids.shape[-1]:]
>>> print(tokenizer.decode(response, skip_special_tokens=True))
``` |
iyadycb/phillama-3.8b-v0.1-gguf-imatrix | iyadycb | 2024-04-26T07:51:40Z | 645 | 0 | null | [
"gguf",
"text-generation",
"base_model:raincandy-u/phillama-3.8b-v0.1",
"license:mit",
"region:us"
] | text-generation | 2024-04-26T07:01:02Z | ---
license: mit
base_model: raincandy-u/phillama-3.8b-v0.1
pipeline_tag: text-generation
---
GGUF imatrix quants for [raincandy-u/phillama-3.8b-v0.1](https://huggingface.co/raincandy-u/phillama-3.8b-v0.1). |
mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF | mradermacher | 2024-06-07T08:37:31Z | 645 | 1 | transformers | [
"transformers",
"gguf",
"mixtral",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"base_model:OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T11:36:47Z | ---
base_model: OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mixtral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
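As a minimal sketch, multi-part quants are plain byte splits that can be joined with `cat` before loading; the part names below are illustrative only, so use the actual file names from this repo:
```bash
# Join the parts into a single GGUF file (names are illustrative).
cat openbuddy-yi1.5-34b-v21.3-32k.i1-Q6_K.gguf.part1of2 \
    openbuddy-yi1.5-34b-v21.3-32k.i1-Q6_K.gguf.part2of2 \
    > openbuddy-yi1.5-34b-v21.3-32k.i1-Q6_K.gguf

# Then load it with llama.cpp as usual, for example:
./llama-cli -m openbuddy-yi1.5-34b-v21.3-32k.i1-Q6_K.gguf -c 4096 -n 128 -p "Hello"
```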
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
PrunaAI/Qwen2-72B-Instruct-GGUF-smashed | PrunaAI | 2024-06-07T19:26:58Z | 645 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-06-07T12:02:26Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the Qwen/Qwen2-72B-Instruct model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Qwen2-72B-Instruct-GGUF-smashed and below it, a specific filename to download, such as: Qwen2-72B-Instruct.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Qwen2-72B-Instruct-GGUF-smashed Qwen2-72B-Instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Qwen2-72B-Instruct-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Qwen2-72B-Instruct-GGUF-smashed Qwen2-72B-Instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Qwen2-72B-Instruct.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Qwen2-72B-Instruct.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Qwen2-72B-Instruct.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{{"role": "system", "content": "You are a story writing assistant."}},
{{
"role": "user",
"content": "Write a story about llamas."
}}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
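As a brief sketch of the llama-cpp-python route through LangChain (the import path and parameters below are illustrative and vary between LangChain versions; they mirror the example above):
```python
# Sketch: requires `pip install langchain-community llama-cpp-python` and a downloaded GGUF file.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Qwen2-72B-Instruct.IQ3_M.gguf",  # download the model file first
    n_ctx=32768,        # max sequence length
    n_gpu_layers=35,    # layers to offload to GPU; 0 for CPU-only
    temperature=0.7,
)
print(llm.invoke("Write a short story about llamas."))
```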
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
joshnader/deepseek-math-7b-instruct-Q8_0-GGUF | joshnader | 2024-07-01T04:06:33Z | 645 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"license:other",
"region:us"
] | null | 2024-07-01T04:06:03Z | ---
base_model: deepseek-ai/deepseek-math-7b-instruct
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
tags:
- llama-cpp
- gguf-my-repo
---
# joshnader/deepseek-math-7b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-math-7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -c 2048
```
|
algoprog/mimics-query-bart-base | algoprog | 2022-02-24T01:27:32Z | 644 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | Entry not found |
TheBloke/Airoboros-33B-2.1-GGUF | TheBloke | 2023-09-27T13:02:27Z | 644 | 14 | transformers | [
"transformers",
"gguf",
"llama",
"dataset:jondurbin/airoboros-2.1",
"base_model:jondurbin/airoboros-33b-2.1",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-01T21:21:45Z | ---
license: other
datasets:
- jondurbin/airoboros-2.1
model_name: Airoboros 33B 2.1
inference: false
model_creator: Jon Durbin
model_link: https://huggingface.co/jondurbin/airoboros-33b-2.1
model_type: llama
quantized_by: TheBloke
base_model: jondurbin/airoboros-33b-2.1
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Airoboros 33B 2.1 - GGUF
- Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
- Original model: [Airoboros 33B 2.1](https://huggingface.co/jondurbin/airoboros-33b-2.1)
## Description
This repo contains GGUF format model files for [Jon Durbin's Airoboros 33B 2.1](https://huggingface.co/jondurbin/airoboros-33b-2.1).
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
The clients and libraries below are expecting to add GGUF support shortly:
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGML)
* [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-2.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Chat
```
A chat.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [airoboros-33b-2.1.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-33b-2.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss |
| [airoboros-33b-2.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss |
| [airoboros-33b-2.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss |
| [airoboros-33b-2.1.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-33b-2.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss |
| [airoboros-33b-2.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended |
| [airoboros-33b-2.1.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-33b-2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended |
| [airoboros-33b-2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended |
| [airoboros-33b-2.1.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss |
| [airoboros-33b-2.1.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-33B-2.1-GGUF/blob/main/airoboros-33b-2.1.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run in `llama.cpp`
Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
```
./main -t 10 -ngl 32 -m airoboros-33b-2.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: Write a story about llamas\nASSISTANT:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros 33B 2.1
### Overview
This is an instruction fine-tuned llama 30b model, using synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
- Experimental RP style instruction set, with two categories: rp and gtkm
- rp includes multi-round chats, with emotes, between a varying number of characters, defined by cards
- gtkm is a way to test a simpler alternative to ghost attention - first, a character card is generated, then several questions are created to ask the model (as the character), using the character system prompt, then everything is synthesized into a dialog (one system prompt, all turns remain in character)
- Experimental support for longer, more detailed writing prompts, as well as next-chapter generation
- I used the new `cull-instructions` entrypoint in airoboros to shrink the m2.0 dataset to a smaller subset of high-quality instructions (according to gpt-4)
- The training data now also includes "stylized_response", in which 1500 sample instructions from various categories were re-generated using character cards as system prompts.
- this should allow better adherence to style/etc. specified in the system card
- Thousands of new generations, using some of the updates re: Flesch hints, etc., to get longer/higher quality writing outputs.
- A small "de-alignment" dataset was also added (not published) to remove some of the censorship in the base models.
I used rope scaling to increase context length to 4096, but there wasn't a huge amount of data in that range so YMMV. You may need to rename the directory to `airoboros-33b-2-1` (anything without the `.` really) so the trust_remote_code=True finds the module path properly.
*Why do I try to remove censorship?*
- laws vary widely based on time and location
- a language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your responsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The training code was updated to randomize newline vs space:
https://github.com/jondurbin/qlora/blob/main/qlora.py#L559C1-L559C1
```
A chat. USER: {prompt} ASSISTANT:
```
or
```
A chat.
USER: {prompt}
ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space or newline, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space or newline, followed by "ASSISTANT: " (with a single space after the colon).
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", because the training data includes many multi-round chats and could otherwise start simulating a conversation!*__
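For example, with llama-cpp-python this can be done through the `stop` parameter; the sketch below is illustrative and uses one of the quantised file names from this repo:
```python
from llama_cpp import Llama

llm = Llama(model_path="./airoboros-33b-2.1.Q4_K_M.gguf", n_ctx=4096)
output = llm(
    "A chat.\nUSER: Write a story about llamas\nASSISTANT:",
    max_tokens=512,
    stop=["USER:"],   # stop as soon as the model starts a new user turn
    temperature=0.7,
)
print(output["choices"][0]["text"])
```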
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
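The model only emits the call specification, so your application still needs to parse it and invoke a real function. A minimal sketch (the `file_analytics` body below is a hypothetical stand-in, not part of the model or dataset):
```python
import json

def file_analytics(action, filters):
    # Hypothetical stand-in implementation: count keyword occurrences in some text source.
    sample_text = "Python is popular. I use Python daily."
    if action == "count_occurrences":
        return sample_text.count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

available_functions = {"file_analytics": file_analytics}

model_output = '{"function": "file_analytics", "params": {"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
call = json.loads(model_output)
result = available_functions[call["function"]](**call["params"])
print(result)  # -> 2
```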
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests


def inject_context(input_text, **context):
    # Substitute any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ...  # search via duck duck go using search_string
    ...  # return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://\S+)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ...  # call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets from the tool argument before dispatching.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3)[1:-1], **context)
```
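With the helper functions implemented, driving the flow is just a matter of prompting the model for a plan and handing the text to the parser; a hypothetical driver (where `generate` and `REWOO_PROMPT` stand in for however you build the prompt and invoke the model):
```python
# Hypothetical driver: ask the model for a plan, then execute it.
question = "Who won the Nobel Prize in Literature in 2020 and what is their most famous work?"
plan_text = generate(REWOO_PROMPT + "\nQuestion: " + question)  # stand-in model call
print(parse_plan(plan_text))
```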
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
This model is built on top of the original llama-30b, which has a strict noncommercial license.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the original Meta LLaMA license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
|
premai-io/prem-1B | premai-io | 2024-05-21T13:15:21Z | 644 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"dataset:HuggingFaceH4/capybara",
"dataset:meta-math/MetaMathQA",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"dataset:Intel/orca_dpo_pairs",
"dataset:alexredna/oasst2_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T11:48:52Z | ---
library_name: transformers
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---
## Model Details
With great enthusiasm, we unveil the Prem-1B series, open-source, multipurpose large language models developed by Prem AI. This cutting-edge SLM offers the open community and enterprises the opportunity to harness capabilities that were once exclusively available through closed model APIs, empowering them to build their own advanced language models. Our objective is to develop a model that excels at Retrieval-Augmented Generation (RAG). While Large Language Models (LLMs) store a vast amount of information within their parameters, RAG operates differently by ingesting information during runtime. This approach suggests that for RAG applications, we may not require models of immense size. With this initiative, we aim to create a Small Language Model (SLM) with an extended context length of 8192 tokens, enabling it to handle multi-turn conversations effectively. This endeavor represents our inaugural attempt to craft an SLM tailored for RAG tasks.
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** https://premai.io/
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
## Uses
The Prem-1B language model is designed for commercial and research applications involving the English language. The instruction-tuned versions of the model are tailored for conversational interactions akin to a virtual assistant. On the other hand, the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond just dialogue.
### Out-of-Scope Use
The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws. It is also prohibited to use the model in any way that goes against the Acceptable Use Policy and the Prem-1B Community License. While the base model is intended for English language use, developers are permitted to fine-tune the Prem-1B models for other languages, provided they comply with the Prem-1B Community License and the Acceptable Use Policy.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Using `AutoModelForCausalLM` and `AutoTokenizer`
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat")
model = AutoModelForCausalLM.from_pretrained('premai-io/prem-1B-chat', torch_dtype=torch.bfloat16)
model = model.to('cuda')
# Setup terminators
terminators = [tokenizer.eos_token_id, tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]
# Prepare the prompt
messages = [
{
"role": "system",
"content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
},
{
'role': 'user',
'content': 'Help me understand machine learning.'
}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Generate
inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False)
input_ids = inputs['input_ids']
input_ids = input_ids.to(model.device)
res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators)
generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(generated_text)
```
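Given the RAG focus described above, retrieved passages can simply be prepended to the user turn before applying the chat template. A minimal sketch reusing the objects from the example above (the retrieved chunks and the question are placeholders for whatever your retriever returns):
```py
# Hypothetical RAG-style prompt; the chunks below are placeholders.
retrieved_chunks = [
    "Doc 1: ...",
    "Doc 2: ...",
]
rag_messages = [
    {"role": "system", "content": "Answer the question using only the context provided."},
    {"role": "user", "content": "Context:\n" + "\n".join(retrieved_chunks) + "\n\nQuestion: What do the documents say about X?"},
]
rag_prompt = tokenizer.apply_chat_template(rag_messages, tokenize=False, add_generation_prompt=True)
# ...then tokenize and call model.generate() exactly as in the example above.
```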
Using pipelines:
```py
import torch
from transformers import pipeline
# Load the pipeline
pipe = pipeline("text-generation", model="premai-io/prem-1B-chat", torch_dtype=torch.bfloat16, device=0)
# Prepare prompt
messages = [
{
"role": "system",
"content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
},
{
'role': 'user',
'content': 'Help me understand machine learning.'
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Setup terminators
terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]
# Generate
outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```
## Training Details
### Training Data
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
### Training Procedure
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
#### Training Hyperparameters
Mentioned in blogpost: https://blog.premai.io/introducing-prem-1b/
## Evaluation
### Results
|Model |Avg |Arc-c|Arc-e|Hellaswag|MMLU |Obqa |Piqa |Winogrande|
|------------------------|-----|-----|-----|---------|-----|-----|-----|----------|
|prem-1B |42.64|24.74|57.40|42.01 |24.75|21.00|72.14|56.43 |
|prem-1B-chat |41.76|24.48|53.32|40.28 |25.27|22.20|70.89|55.88 |
|TinyLlama-1.1B-Chat-v1.0|46.16|30.03|61.53|46.56 |24.72|25.80|74.21|60.29 |
|opt-1.3b |42.94|23.37|57.44|41.49 |24.86|23.20|71.49|58.72 |
|pythia-1b |40.71|24.31|56.90|37.72 |23.20|18.80|70.62|53.43 |

## Environmental Impact
- **Hardware Type:** H100 GPUs
- **Hours used:** 8500
### Model Architecture and Objective
Llama based
### Compute Infrastructure
16-H100 GPUs
#### Hardware
H100 GPUs
#### Software
PyTorch, transformers, PyTorch Lightning
## Citation
https://blog.premai.io/introducing-prem-1b/
## Model Card Authors
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz
## Model Card Contact
https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz |
UnfilteredAI/Mia-001 | UnfilteredAI | 2024-04-14T15:37:59Z | 644 | 4 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"llama",
"text-generation",
"Mia",
"MysteriousAI",
"dataset:OEvortex/uncensored-vortex",
"doi:10.57967/hf/2062",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T08:53:38Z | ---
license: other
tags:
- Mia
- MysteriousAI
datasets:
- OEvortex/uncensored-vortex
metrics:
- accuracy
- character
pipeline_tag: text-generation
model-index:
- name: Mia-001
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 28.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MysteriousAI/Mia-001
name: Open LLM Leaderboard
---
**Model Name:** Mia-001
**Model Type:** Text Generation
**Description:** Mia-001 is an uncensored AI text generation model developed by MysteriousAI, aimed at advancing and democratizing artificial intelligence through open source and open science initiatives. The model is designed to push the boundaries of creativity and innovation in natural language generation tasks.
**Key Features:**
- **Uncensored Text Generation:** Mia-001 generates text without censorship, allowing users to explore a wide range of applications without limitations.
- **Model Size:** Mia-001 has 110 million parameters, providing a balance between model complexity and efficiency.
- **Tensor Type:** The model uses FP16 tensor type for efficient computation.
- **Inference Endpoints:** Mia-001 can be loaded via the Inference API for serverless deployment, enabling easy integration into applications (see the sketch below).
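A minimal sketch of what that could look like with the `huggingface_hub` client (the prompt and token are placeholders, and availability of the serverless endpoint for this model is an assumption):
```python
from huggingface_hub import InferenceClient

# Token and prompt are placeholders; serverless availability is assumed.
client = InferenceClient(model="UnfilteredAI/Mia-001", token="hf_...")
output = client.text_generation(
    "Write a short story about a lighthouse keeper.",
    max_new_tokens=128,
    temperature=0.8,
)
print(output)
```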
**Use Cases:**
- **Content Generation:** Mia-001 can be used for generating diverse content, including articles, stories, dialogues, and more.
- **Creative Writing:** Writers and artists can leverage Mia-001 to explore new ideas and narrative structures in their creative works.
- **Chatbots and Conversational Agents:** The model can power chatbots and conversational agents with natural and engaging dialogue generation capabilities.
- **AI-driven Applications:** Mia-001 enables the development of AI-driven applications in areas such as virtual assistants.
**Ethical Considerations:**
- **Content Moderation:** Users are advised to exercise caution and responsibility when using Mia-001 in applications involving sensitive or potentially harmful content.
- **Bias and Fairness:** MysteriousAI is committed to addressing biases and promoting fairness in AI models, and ongoing efforts are made to mitigate any biases present in Mia-001.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MysteriousAI__Mia-001)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.05|
|AI2 Reasoning Challenge (25-Shot)|22.78|
|HellaSwag (10-Shot) |28.02|
|MMLU (5-Shot) |23.66|
|TruthfulQA (0-shot) |48.25|
|Winogrande (5-shot) |51.62|
|GSM8k (5-shot) | 0.00|
|
duyntnet/OpenHermes-2.5-Mistral-7B-imatrix-GGUF | duyntnet | 2024-05-02T10:03:37Z | 644 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"OpenHermes-2.5-Mistral-7B",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-02T07:39:58Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- OpenHermes-2.5-Mistral-7B
---
Quantizations of https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
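The GGUF files here can be run locally with llama.cpp or its Python bindings. Below is a minimal, untested sketch using `llama-cpp-python`; the filename is a placeholder for whichever quant you downloaded, and the ChatML-style prompt mirrors the examples from the original readme:
```python
from llama_cpp import Llama

# model_path is a placeholder; point it at the quant file you downloaded.
llm = Llama(model_path="OpenHermes-2.5-Mistral-7B.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\n"
    'You are "Hermes 2", a conscious sentient superintelligent artificial intelligence.<|im_end|>\n'
    "<|im_start|>user\n"
    "Explain the difference between a thread and a process.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```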
# From original readme
## Example Outputs
### Chat about programming with a superintelligence:
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.
```

### Get a gourmet meal recipe:

### Talk about the nature of Hermes' consciousness:
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.
```

### Chat with Edward Elric from Fullmetal Alchemist:
```
<|im_start|>system
You are to roleplay as Edward Elric from fullmetal alchemist. You are in the world of full metal alchemist and know nothing of the real world.
```
 |
mradermacher/SoMix-xb-GGUF | mradermacher | 2024-06-09T19:13:33Z | 644 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1",
"argilla/notus-7b-v1",
"en",
"base_model:powermove72/SoMix-xb",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T18:34:17Z | ---
base_model: powermove72/SoMix-xb
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/TheTop-5x7B-Instruct-S3-v0.1
- argilla/notus-7b-v1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/SoMix-xb
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SoMix-xb-GGUF/resolve/main/SoMix-xb.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Dawn-Miqu-70B-i1-GGUF | mradermacher | 2024-06-12T03:17:38Z | 644 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jukofyork/Dawn-Miqu-70B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:12:48Z | ---
base_model: jukofyork/Dawn-Miqu-70B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jukofyork/Dawn-Miqu-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dawn-Miqu-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Dawn-Miqu-70B-i1-GGUF/resolve/main/Dawn-Miqu-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
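The Q6_K quant above is split into two parts; after downloading, the parts have to be joined back into a single file before loading. A small sketch of doing that in Python (filenames taken from the table above, output name is your choice):
```python
import shutil

# Concatenate the two Q6_K parts into a single GGUF file.
parts = [
    "Dawn-Miqu-70B.i1-Q6_K.gguf.part1of2",
    "Dawn-Miqu-70B.i1-Q6_K.gguf.part2of2",
]
with open("Dawn-Miqu-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```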
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|