modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
dahara1/ELYZA-japanese-Llama-2-7b-fast-instruct-GPTQ | dahara1 | "2023-11-13T15:10:58Z" | 1,084 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"en",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-30T09:18:50Z" | ---
inference: false
license: llama2
language:
- ja
- en
---
# Model Card for Model ID
The original model, [elyza/ELYZA-japanese-Llama-2-7b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct), is based on Meta's "Llama 2"; it has undergone additional pre-training in Japanese, together with ELYZA's own post-training and speed-up tuning.
This model is a quantized version (reduced to 4.11 GB) of the original model (13.69 GB).
## Model Details
Quantization reduces the amount of memory required and improves execution speed, but performance unfortunately deteriorates.
In particular, the original model was tuned to strengthen its ability to follow Japanese instructions, not to maximize benchmark scores.
Although instruction-following ability cannot be measured with existing automated benchmarks, we have confirmed that quantization significantly degrades it.
At least one GPU is currently required due to a limitation of the Accelerate library,
so this model cannot be run on the free tier of Hugging Face Spaces.
You need the [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) library to use this model.
## Other Quantized Models
### New!
[dahara1/ELYZA-japanese-Llama-2-7b-instruct-AWQ](https://huggingface.co/dahara1/ELYZA-japanese-Llama-2-7b-instruct-AWQ) has been newly published.
The AWQ model has an improved ability to follow instructions, so please try it.
There are also two [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized (gguf) versions of this model.
If you want to run it in a CPU-only environment, you may want to check these (a minimal CPU-only example is sketched after the links below).
(1)[mmnga's gguf version](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf)
(2)[opparco's gguf version](https://huggingface.co/opparco/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf)
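For a CPU-only setup, the following is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the repository id is taken from the link above, but the exact gguf filename and the generation settings are assumptions, so please check the repository's file list first.
```python
# Hedged CPU-only sketch with llama-cpp-python; the gguf filename is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf",
    filename="ELYZA-japanese-Llama-2-7b-fast-instruct-q4_K_M.gguf",  # assumption: verify the actual filename
)
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=8)  # CPU only, no GPU offload
prompt = "[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n日本の首都はどこですか? [/INST] "
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```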
### Japanese automated benchmark result
Benchmark settings are the same as [weblab-10b-instruction-sft-GPTQ](https://huggingface.co/dahara1/weblab-10b-instruction-sft-GPTQ)
| Task |Version| Metric |Value | |Stderr|
|----------------------|------:|--------|-----:|---|-----:|
|jcommonsenseqa-1.1-0.3| 1.1|acc |0.7417|± |0.0131|
| | |acc_norm|0.3485|± |0.0143|
| Task |Version| Metric | Value | |Stderr|
|--------------|------:|-----------|------:|---|------|
|jsquad-1.1-0.3| 1.1|exact_match|69.0455| | |
| | |f1 |80.2155| | |
### Sample Code
```
pip install auto-gptq
```
```
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
quantized_model_dir = "dahara1/ELYZA-japanese-Llama-2-7b-fast-instruct-GPTQ"
model_basename = "gptq_model-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoGPTQForCausalLM.from_quantized(
quantized_model_dir,
model_basename=model_basename,
use_safetensors=True,
disable_exllama=False,
inject_fused_attention=False,
device="cuda:0")
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"
text = "クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。"
prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format(
bos_token=tokenizer.bos_token,
b_inst=B_INST,
system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}",
prompt=text,
e_inst=E_INST,
)
tokens = tokenizer(prompt, return_tensors="pt").to("cuda:0").input_ids
output = model.generate(
input_ids=tokens,
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```
Result:
```
<s><s> [INST] <<SYS>>
あなたは誠実で優秀な日本人のアシスタントです。
<</SYS>>
クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。 [/INST] クマは海辺にやってきました。
海辺はとてもきれいでした。
クマは海辺に座って、海を眺めていました。
すると、海辺にアザラシがやってきました。
アザラシはクマに話しかけました。
「どうしてここにいるの?」「私は海辺に座って海を眺めていました。
すると、アザラシがやってきました。
アザラシはクマに話しかけました。
「どうしてここにいるの?」「私は海辺に座って海を眺めていました。
すると、アザラシがやってきました。
アザラシはクマに話しかけました。
「どうしてここにいるの?」「私は海辺に座って海を眺めていました。
すると、アザラシがやってきました。
アザラシはクマに話しかけました。
「どうしてここにいるの?」「私は海辺に座って海を眺めていました。
すると、アザラシがやってきました。
アザラシはクマに話しかけました。
「どうしてここに
```
### Colab Sample with extra benchmark
[Colab Sample with extra benchmark](https://github.com/webbigdata-jp/python_sample/blob/main/ELYZA_japanese_Llama_2_7b_fast_instruct_GPTQ_sample.ipynb)
elyza_tasks_100_over_4score_prompt borrows data from [ELYZA-tasks-100 評価結果シート](https://docs.google.com/spreadsheets/d/1mtoy4QAqDPk2f_B0vDogFoOrbA5G42DBEEHdqM4VmDI/edit#gid=1023787356).
The original model performed well on these prompts, but this quantized model does not always give good results.
If you need high performance, please use the original model.
### Citations
This model is based on the work of the following people:
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-7b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
year={2023},
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### about this work
- **This Quantization work was done by :** [webbigdata](https://webbigdata.jp/) |
Setiaku/p2tr1-13b-l2-fp16 | Setiaku | "2023-10-23T16:28:17Z" | 1,084 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T13:35:21Z" | ---
license: llama2
language:
- en
---
<br>
<br>
<br>
<br>
I picked random models and merged them together :))))))
<br>I think they were leaderboard models maybe?????
<br>Should be. :>
<br>Kinda dumb hahahaha
<br>experimental
<br>
<br>latnemirepxe
-putri |
NobodyExistsOnTheInternet/Yi-34b-qlora-e1 | NobodyExistsOnTheInternet | "2023-11-07T07:51:12Z" | 1,084 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-11-07T07:30:50Z" | ---
license: mit
datasets:
- NobodyExistsOnTheInternet/GiftedConvoBeforeEcons
---
This is a Yi-34B adapter tuned on a subset of PRM800K, CamelAI physics and chemistry, and convoEvolLima.
This model is the first epoch of many.
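If the repository hosts a PEFT/QLoRA adapter (as the name suggests), a loading sketch might look like the following; note that both the base model id and the adapter assumption are unverified guesses, so check the repository files before relying on this.
```python
# Hedged sketch: assumes this repo is a PEFT adapter on top of an 01-ai/Yi-34B base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B",              # assumption: base model used for the QLoRA tuning
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "NobodyExistsOnTheInternet/Yi-34b-qlora-e1")
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B")
```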
Check the main branch here: https://huggingface.co/NobodyExistsOnTheInternet/Yi-34B-GiftedConvo |
TheBloke/Loyal-Macaroni-Maid-7B-GGUF | TheBloke | "2023-12-24T19:43:25Z" | 1,084 | 25 | transformers | [
"transformers",
"gguf",
"mistral",
"merge",
"not-for-all-audiences",
"nsfw",
"base_model:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | "2023-12-24T19:39:00Z" | ---
base_model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
inference: false
license: cc-by-nc-4.0
model_creator: Sanji Watsuki
model_name: Loyal Macaroni Maid 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
- not-for-all-audiences
- nsfw
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Loyal Macaroni Maid 7B - GGUF
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [Loyal Macaroni Maid 7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sanji Watsuki's Loyal Macaroni Maid 7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [loyal-macaroni-maid-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [loyal-macaroni-maid-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [loyal-macaroni-maid-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [loyal-macaroni-maid-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [loyal-macaroni-maid-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [loyal-macaroni-maid-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [loyal-macaroni-maid-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [loyal-macaroni-maid-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [loyal-macaroni-maid-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [loyal-macaroni-maid-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [loyal-macaroni-maid-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [loyal-macaroni-maid-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Loyal-Macaroni-Maid-7B-GGUF/blob/main/loyal-macaroni-maid-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Loyal-Macaroni-Maid-7B-GGUF and below it, a specific filename to download, such as: loyal-macaroni-maid-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Loyal-Macaroni-Maid-7B-GGUF loyal-macaroni-maid-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Loyal-Macaroni-Maid-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Loyal-Macaroni-Maid-7B-GGUF loyal-macaroni-maid-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m loyal-macaroni-maid-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./loyal-macaroni-maid-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./loyal-macaroni-maid-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
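As a rough illustration of the first guide, a LangChain + llama-cpp-python sketch for this GGUF might look like the following; the file path, context size, and sampling parameters are illustrative assumptions, not recommendations.
```python
# Hedged sketch: wrap the local GGUF in LangChain's LlamaCpp wrapper.
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

llm = LlamaCpp(
    model_path="./loyal-macaroni-maid-7b.Q4_K_M.gguf",  # download the model file first
    n_ctx=32768,
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
    max_tokens=512,
)

template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = PromptTemplate.from_template(template)
# On older LangChain versions, call llm(...) instead of llm.invoke(...)
print(llm.invoke(prompt.format(instruction="Write a story about llamas.")))
```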
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sanji Watsuki's Loyal Macaroni Maid 7B

<!-- description start -->
## Description
This repository hosts quantized GGUF files for **Loyal-Macaroni-Maid-7B**, a 7B model aimed at having engaging RP with solid character card adherence and being a smart cookie at the same time.
In my limited testing, it's a great RP model suitable for RP/ERP with sharp reasoning skills for a 7B. I expect it to both benchmark well and be a very suitable model for general use.
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). Additionally, here is my [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json)
Otherwise, I tried to ensure that most of the underlying merged models were Alpaca-ish.
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Helpful Tips
For SFW RP, I found that I got the most use out of this model when I had an RPG Narrator in a group chat with the characters I wanted to RP with. Here is an importable character card for the best RPG Narrator I found thus far.

For basic ChatGPT tasks, here is the basic Assistant card that I use. I found it works best with Default context template / Alpaca instruct template in Silly Tavern.

## Frankenstein's Merger
**tl;dr: This is a bunch of model merger slop with a bunch of RP cherries on top.**
I'll keep it a buck - I'm not a fan of this model's composition. Based on my testing, it seemed like models that were built from a merger of OpenChat-3.5/Starling and NeuralChat v3.1 had surprisingly good character card coherence for a 7B model - better than either one in isolation. This is validated both in my personal benchmarks as well as the [Ayumi NSFW ERP ALC-IQ3 metric](http://ayumi.m8geil.de/ayumi_bench_v3_results.html) which rates character card coherence and is dominated by OpenNeuralChat mergers for small models.
The issue is... prompt format.
OpenChat-3.5 uses an abomination of a prompt format with "GPT4 Correct User/Assistant" all over it in a ChatML-style prompt with extra tokens for padding and end-of-turn. NeuralChat v3.1 uses a weird Alpaca-like format with "### System/User/Assistant" all over it. Almost every RP finetune standardized on Alpaca or an expanded Alpaca with janky multi-turn prompting (since Alpaca doesn't have multi-turn prompting).
Most model mergers like [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) just slam them together and toss the extra ChatML tokens, resulting in a half-Alpaca-like half-ChatML-like Frankenstein's monster. For the most part, using Alpaca as the lingua franca just kinda works but [there are exceptions that can make a generation go off the rails](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3/discussions/6). I found this to be a bit of an issue in certain SillyTavern test cases.
Regardless, the strong Chat Arena performances from 7B models continue to lead me to believe they're the strongest base for an all-purpose model.
### The sauce (All You Need is DARE)
**tl;dr: It's an OpenChat/NeuralChat merger with as much RP as possible stuffed in using the DARE TIES merger method.**
This model is a DARE TIES merger between Toten5/Marcoroni-neural-chat-7B-v2, chargoddard/loyal-piano-m7, Undi95/Toppy-M-7B, NeverSleep/Noromaid-7b-v0.2, and athirdpath/NSFW_DPO_vmgb-7b on top of a mistralai/Mistral-7B-v0.1 base.
```
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: Toten5/Marcoroni-neural-chat-7B-v2
    parameters:
      weight: 0.3
      density: 0.8
  - model: chargoddard/loyal-piano-m7
    parameters:
      weight: 0.4
      density: 0.8
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.2
      density: 0.4
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
      density: 0.4
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
      density: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
There's a lot to unpack here. I went with DARE TIES because it appeared to be a viable way to combine information into models without losing smarts. Directly SLERPing a smart cookie model with an ERP brained model will often dilute both the model's smarts and RPing ability. This is an attempt to have my cookie and eat it, too.
First, there are two high density high weight models:
[chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) is the easy primary model choice. It's an Alpaca prompt format model that scores highly, is very creative for a 7B, and is primarily trained on RP data.
[Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2) is the unintuitive second model pick. It is a merger of mergers that chains back to being an OpenChat/NeuralChat merger being SLERPed back into NeuralChat a second time. Despite SLERPing NeuralChat in multiple times, it retains its high benchmark scores. I opted to pick this model as my base because I believed it was the OpenChat/NeuralChat model that benchmarked well that was closest to the O.G. NeuralChat which has the most Alpaca-like prompt.
By picking a density of 0.8, these models have a 96% chance of showing up for any TIE merger (each model drops a given delta with probability 0.2, so the chance that at least one of the two contributes it is 1 - 0.2 × 0.2 = 0.96). This should ensure that there is a solid "base" of deltas from the base Mistral model that captures most of what makes these models good. High density with 0.3-0.4 weights has been shown to work well in mergers like [jan-hq/supermario-v2](https://huggingface.co/jan-hq/supermario-v2).
Next, there are 3 RP models merged in with medium density. [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Toppy-M-7B is an easy pick for being a well regarded 7B RP model - although, it is a merger of many mergers which might dilute its effectiveness as a lower density merge. NeverSleep/Noromaid-7b-v0.2 pulls in the unique private Noromaid RP dataset. Finally, athirdpath/NSFW_DPO_vmgb-7b is another Frankenstein OpenNeuralChat merger that happens to be DPOed on athirdpath's NSFW Alpaca pairs which seemed like another good RP addition to the model (plus, maybe it tilts it to being more Alpaca-flavored, idk).
By picking a density of 0.4, these models should *largely* impart some of their flavor onto the merger. I suspect the density could go even lower and the models could be used even more like a LoRA-like merger on top.
The DARE TIES merger is intentionally overweight and non-normalized at 1.3 total weight. I intentionally went overweight to try and better capture the individual characteristics from the various models. With wide mergers, a weight of 1.0 can often become incoherent like [jan-hq/supermario-v1](https://huggingface.co/jan-hq/supermario-v1).
Putting it all together, ~60% of the model is "base models" like OpenChat/NeuralChat/Loyal-Piano-M7. ~40% of the model is effectively me trying to extract RP information from existing RP models. The only non-RP model is the Marcoroni base which means that almost 80% of this model is intended for RP.
Not that the benchmarks matter, but if this merger works right, it'll be a high benchmarking 7B that is both smart and strong at RP.
<!-- original-model-card end -->
|
HwiyeolJo/testttt | HwiyeolJo | "2024-01-03T17:13:42Z" | 1,084 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T17:07:12Z" | Entry not found |
oneonlee/KoSOLAR-v0.2-gugutypus-10.7B | oneonlee | "2024-03-02T13:39:12Z" | 1,084 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"dataset:kyujinpy/KOR-gugugu-platypus-set",
"base_model:yanolja/KoSOLAR-10.7B-v0.2",
"doi:10.57967/hf/1735",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T08:27:59Z" | ---
language:
- en
- ko
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-gugugu-platypus-set
base_model:
- yanolja/KoSOLAR-10.7B-v0.2
pipeline_tag: text-generation
---
<div align="center">
<h1>🤗 KoSOLAR-v0.2-gugutypus-10.7B ☀️</h1>
<a style="margin: 0px;" href="https://github.com/oneonlee/KoSOLAR-v0.2-gugutypus-10.7B"><img style="margin: 0.5em;" alt="GitHub" src="https://img.shields.io/badge/GitHub-181717.svg?style=flat&logo=GitHub"></a>
<a style="margin: 0px;" href="https://huggingface.co/oneonlee/KoSOLAR-v0.2-gugutypus-10.7B"><img style="margin: 0.5em;" alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97-Models%20on%20Hub-yellow"></a>
<a style="margin: 0px;" href="https://github.com/oneonlee/KoSOLAR-v0.2-gugutypus-10.7B/blob/main/LICENSE"><img style="margin: 0.5em;" alt="License: CC BY-NC 4.0" src="https://img.shields.io/badge/License-CC%20BY%2D%2DNC%204.0-blue.svg"></a>
<a style="margin: 0px;" href="https://doi.org/10.57967/hf/1735"><img style="margin: 0.5em;" alt="DOI" src="https://img.shields.io/badge/DOI-10.57967%2Fhf%2F1735-blue"></a>
<img src="logo.png" height=350, width=350>
</div>
---
## Model Details
**Model Developers**
- DongGeon Lee ([oneonlee](https://huggingface.co/oneonlee))
**Model Architecture**
- **KoSOLAR-v0.2-gugutypus-10.7B** is an instruction-fine-tuned, auto-regressive language model based on the [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) transformer architecture.
**Base Model**
- [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
**Training Dataset**
- [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set)
---
## Model comparisons
- **Ko-LLM leaderboard (2024/03/01)** [[link]](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ----------------------------------------- | ----------- | ------ | ------------ | ------- | ------------- | --------------- |
| **oneonlee/KoSOLAR-v0.2-gugutypus-10.7B** | **51.17** | 47.78 | 58.29 | 47.27 | 48.31 | 54.19 |
| [oneonlee/LDCC-SOLAR-gugutypus-10.7B](https://huggingface.co/oneonlee/LDCC-SOLAR-gugutypus-10.7B) | 49.45 | 45.9 | 55.46 | 47.96 | 48.93 | 49 |
<br>
- **(KOR) AI-Harness evaluation** [[link]](https://github.com/Beomi/ko-lm-evaluation-harness)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|-------------------------|-------|------|-----:|------|-----:|---|-----:|
|KMMLU |N/A |none | 0|acc |0.3335|± |0.0475|
|KMMLU |N/A |none | 5|acc |0.3938|± |0.0823|
|KoBEST-HellaSwag | 0|none | 0|acc |0.4360|± |0.0222|
|KoBEST-HellaSwag | 0|none | 5|acc |0.4420|± |0.0222|
|KoBEST-BoolQ | 0|none | 0|acc |0.5064|± |0.0133|
|KoBEST-BoolQ | 0|none | 5|acc |0.8583|± |0.0093|
|KoBEST-COPA | 0|none | 0|acc |0.6040|± |0.0155|
|KoBEST-COPA | 0|none | 5|acc |0.7610|± |0.0135|
|KoBEST-SentiNeg | 0|none | 0|acc |0.5844|± |0.0248|
|KoBEST-SentiNeg | 0|none | 5|acc |0.9471|± |0.0112|
<br>
- **(ENG) AI-Harness evaluation** [[link]](https://github.com/EleutherAI/lm-evaluation-harness)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|MMLU |N/A |none | 0|acc |0.5826|± |0.1432|
|MMLU |N/A |none | 5|acc |0.5885|± |0.1285|
|HellaSwag | 1|none | 0|acc |0.6075|± |0.0049|
|HellaSwag | 1|none | 5|acc |0.6098|± |0.0049|
|BoolQ | 2|none | 0|acc |0.8737|± |0.0058|
|BoolQ | 2|none | 5|acc |0.8826|± |0.0056|
|COPA | 1|none | 0|acc |0.8300|± |0.0378|
|COPA | 1|none | 5|acc |0.9100|± |0.0288|
|truthfulqa |N/A |none | 0|acc |0.4249|± |0.0023|
|truthfulqa |N/A |none | 5|acc | - |± | - |
---
## How to Use
```python
### KoSOLAR-gugutypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "oneonlee/KoSOLAR-v0.2-gugutypus-10.7B"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
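A minimal generation sketch continuing from the snippet above; the prompt text and decoding settings below are illustrative assumptions, not the exact format used during fine-tuning.
```python
# Hedged example: greedy decoding with an illustrative Korean prompt.
prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```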
---
## Citation
```
@misc {donggeon_lee_2024,
author = { {DongGeon Lee} },
title = { KoSOLAR-v0.2-gugutypus-10.7B (Revision 56841d5) },
year = 2024,
url = { https://huggingface.co/oneonlee/KoSOLAR-v0.2-gugutypus-10.7B },
doi = { 10.57967/hf/1735 },
publisher = { Hugging Face }
}
```
---
## References
- [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
- [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set)
- [squarelike/OpenOrca-gugugo-ko](https://huggingface.co/datasets/squarelike/OpenOrca-gugugo-ko)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [upstage/open-ko-llm-leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
- [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
- [Beomi/ko-lm-evaluation-harness](https://github.com/Beomi/ko-lm-evaluation-harness) |
rrw-x2/KoSOLAR-10.9B-v0.5 | rrw-x2 | "2024-02-09T13:43:41Z" | 1,084 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"LDCC/LDCC-SOLAR-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"ko",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-09T13:05:53Z" | ---
language:
- ko
base_model:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
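For reference, SLERP (spherical linear interpolation) blends the two sets of weights along the arc between them rather than along a straight line; a standard formulation, written here for two flattened weight tensors, is:
```latex
% Omega is the angle between the flattened tensors W_0 and W_1:
%   cos(Omega) = (W_0 . W_1) / (||W_0|| ||W_1||)
\mathrm{slerp}(W_0, W_1; t) =
  \frac{\sin\!\bigl((1-t)\,\Omega\bigr)}{\sin\Omega}\, W_0
  + \frac{\sin(t\,\Omega)}{\sin\Omega}\, W_1
```
The `t` values in the configuration below control this interpolation factor per layer range and per module type.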
### Models Merged
The following models were included in the merge:
* [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: LDCC/LDCC-SOLAR-10.7B
        layer_range: [0, 48]
      - model: hyeogi/SOLAR-10.7B-dpo-v1
        layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## Datasets
Finetuned using LoRA with [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) |
ChrisWilson011016/5DJnnhpSPpPv2m5K7GP4ytBSpf14JdjoKV3s6wEp7t9kZcSF_vgg | ChrisWilson011016 | "2024-03-04T18:58:20Z" | 1,084 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-24T15:26:18Z" | Entry not found |
rrw-x2/KoSOLAR-10.7B-v1.0 | rrw-x2 | "2024-04-02T08:57:21Z" | 1,084 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T02:46:52Z" | ---
library_name: transformers
tags:
- merge
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
microsoft/deberta-v2-xxlarge-mnli | microsoft | "2021-05-21T20:08:40Z" | 1,083 | 7 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta",
"deberta-mnli",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- deberta
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 XXLarge model fine-tuned on the MNLI task, with 48 layers and a hidden size of 1536 (1.5B parameters in total).
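As a quick usage sketch (not part of the original card), the fine-tuned MNLI head can be queried directly with `transformers`; the premise/hypothesis pair below is just the widget example from this card, and the label names are read from the model config rather than hard-coded.
```python
# Hedged inference sketch for the MNLI-fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "microsoft/deberta-v2-xxlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I love you.", "I like you.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], f"{p.item():.4f}")  # label order comes from the config
```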
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 would also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=rte
output_dir="ds_results"
num_gpus=8
batch_size=4
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=rte
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 4 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
LLMs/Vicuna-EvolInstruct-13B | LLMs | "2023-05-06T13:30:21Z" | 1,083 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-06T12:54:45Z" | Entry not found |
jondurbin/airoboros-m-7b-3.0 | jondurbin | "2023-10-05T20:46:37Z" | 1,083 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:jondurbin/airoboros-3.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-03T09:39:19Z" | ---
license: apache-2.0
datasets:
- jondurbin/airoboros-3.0
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use the llama-2 chat format, rather than the vicuna style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does the problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
    do_strip = False
    for user_input, response in chat_history:
        user_input = user_input.strip() if do_strip else user_input
        do_strip = True
        texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
    message = message.strip() if do_strip else message
    texts.append(f'{message} [/INST] ')
    return ''.join(texts)
def get_prompt(item):
"""
`item` is expected to be in chatml format, e.g.
{
"id": 1,
"conversations": [
{
"role": "system",
"content": "You are a helpful, unbiased, uncensored assistant."
},
{
"role": "user",
"content": "Hi!"
},
{
"role": "assistant",
"content": "Hello, how are you today?"
},
{
"role": "user",
"content": "Just peachy, and you?"
}
]
}
"""
system_prompt = share_gpt_item['conversations'][0]['value']
offset = 1
if share_gpt_item['conversations'][0]['role'] != 'system':
system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
offset = 0
chat_history = [
(share_gpt_item['conversations'][idx]['content'], share_gpt_item['conversations'][idx + 1]['content'])
for idx in range(offset, len(item['conversations']) - 1, 2)
]
message = item['conversations'][-1]['content']
return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
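For illustration, a minimal sketch (not part of airoboros itself) of extracting and parsing the expression before handing it off to an evaluator might look like this:

```python
import json
import re

def extract_mathjson(output: str):
    """Pull the JSON expression out of a <mathjson>...</mathjson> block, if present."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", output, re.DOTALL)
    return json.loads(match.group(1)) if match else None

# Hypothetical model output, mirroring the example above.
model_output = 'Solution as MathJSON:\n<mathjson>\n["Multiply", "Pi", ["Power", 17.2456, 2]]\n</mathjson>'
expression = extract_mathjson(model_output)
print(expression)  # ['Multiply', 'Pi', ['Power', 17.2456, 2]] -- pass this to compute-engine or mathjson.py
```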
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
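If you assemble these prompts in code, a small helper along these lines (a sketch, using only the delimiters described above) keeps the formatting consistent:

```python
def closed_context_prompt(blocks, instruction):
    """Build a closed-context prompt from (metadata dict, text) pairs plus an instruction."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts += ["BEGININSTRUCTION", instruction, "ENDINSTRUCTION"]
    return "\n".join(parts)

prompt = closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
print(prompt)
```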
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
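On the application side, that JSON can be parsed and routed to your own implementations; a rough sketch (the `file_analytics` body and file path here are hypothetical):

```python
import json

def file_analytics(action, filters):
    # Hypothetical implementation of the tool described in the prompt above.
    if action == "count_occurrences":
        with open("my_text_file.txt") as handle:
            return handle.read().count(filters["keyword"])
    raise ValueError(f"Unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_response: str):
    call = json.loads(model_response)
    return AVAILABLE_FUNCTIONS[call["function"]](**call["params"])
```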
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # call the model with prompt and return the output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o_gate_up_down | CHIH-HUNG | "2023-10-05T00:59:56Z" | 1,083 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-05T00:38:25Z" | Entry not found |
Doctor-Shotgun/limarpv3-llama2-70b-qlora | Doctor-Shotgun | "2023-10-25T19:15:49Z" | 1,083 | 3 | transformers | [
"transformers",
"llama",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-10-11T06:56:08Z" | ---
tags:
- generated_from_trainer
model-index:
- name: limarpv3-llama2-70b-qlora
results: []
license: apache-2.0
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# limarpv3-llama2-70b-qlora
This model is an unofficial Llama 2 70B training on the LimaRP v3 dataset by [lemonilia](https://huggingface.co/lemonilia). It does not include the pretraining stage using stories.
It achieves the following results on the evaluation set:
- Loss: 1.8232
## Model description
For more details about LimaRP, see the model page for the [previously released v2 version for Llama-2](https://huggingface.co/lemonilia/limarp-llama2-v2). Most details written there apply for this version as well. Generally speaking, LimaRP is a longform-oriented, novel-style roleplaying chat model intended to replicate the experience of 1-on-1 roleplay on Internet forums. Short-form, IRC/Discord-style RP (aka "Markdown format") is not supported yet. The model does not include instruction tuning, only manually picked and slightly edited RP conversations with persona and scenario data.
Prompt format is the [extended Alpaca format](https://github.com/tatsu-lab/stanford_alpaca):
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
Inspired by the previously named "Roleplay" preset in SillyTavern, with this version of LimaRP it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The lengths used during training are:
`micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
**The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily follow
lengths very precisely, rather follow certain ranges on average, as seen in this table
with data from tests made with one reply at the beginning of the conversation:

Response length control appears to work well also deep into the conversation. **By omitting
the modifier, the model will choose the most appropriate response length** (although it might
not necessarily be what the user desires).
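If you build the prompt in code rather than through a frontend, a small helper like this (a sketch based on the format above, not something shipped with the model) can assemble it, including the optional length modifier:

```python
def limarp_prompt(bot_persona, user_persona, scenario, turns, length=None):
    """Build the extended Alpaca prompt described above.

    `turns` is a list of ("user" | "character", utterance) tuples, ending with the user's latest message.
    """
    suffix = f" (length = {length})" if length else ""
    parts = [
        "### Instruction:",
        f"Character's Persona: {bot_persona}",
        f"User's Persona: {user_persona}",
        f"Scenario: {scenario}",
        "Play the role of Character. You must engage in a roleplaying chat with User below this line. "
        "Do not write dialogues and narration for User.",
        "",
    ]
    for speaker, utterance in turns:
        if speaker == "user":
            parts += ["### Input:", f"User: {utterance}", ""]
        else:
            parts += ["### Response:", f"Character: {utterance}", ""]
    parts += [f"### Response:{suffix}", "Character:"]
    return "\n".join(parts)

print(limarp_prompt("A stoic knight.", "A curious traveler.", "They meet at a crossroads.",
                    [("user", "Well met, sir knight.")], length="medium"))
```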
## Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.
## Training and evaluation data
For more details about LimaRP, see the model page for the [previously released v2 version for Llama-2](https://huggingface.co/lemonilia/limarp-llama2-v2).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8482 | 0.09 | 20 | 1.8569 |
| 1.6823 | 0.18 | 40 | 1.8400 |
| 1.779 | 0.27 | 60 | 1.8329 |
| 1.7776 | 0.36 | 80 | 1.8287 |
| 1.7773 | 0.45 | 100 | 1.8280 |
| 1.7328 | 0.53 | 120 | 1.8273 |
| 1.7349 | 0.62 | 140 | 1.8243 |
| 1.7789 | 0.71 | 160 | 1.8228 |
| 1.8113 | 0.8 | 180 | 1.8215 |
| 1.7 | 0.89 | 200 | 1.8203 |
| 1.7279 | 0.98 | 220 | 1.8201 |
| 1.7605 | 1.07 | 240 | 1.8225 |
| 1.7492 | 1.16 | 260 | 1.8245 |
| 1.7823 | 1.25 | 280 | 1.8235 |
| 1.6247 | 1.34 | 300 | 1.8247 |
| 1.6858 | 1.43 | 320 | 1.8246 |
| 1.6561 | 1.51 | 340 | 1.8240 |
| 1.7093 | 1.6 | 360 | 1.8240 |
| 1.6844 | 1.69 | 380 | 1.8235 |
| 1.6608 | 1.78 | 400 | 1.8233 |
| 1.7686 | 1.87 | 420 | 1.8233 |
| 1.7189 | 1.96 | 440 | 1.8232 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1 |
Danielbrdz/CodeBarcenas-1b | Danielbrdz | "2023-11-10T16:07:18Z" | 1,083 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-10T15:42:39Z" | ---
license: llama2
---
CodeBarcenas
Model specialized in the Python language
Based on the model: WizardLM/WizardCoder-1B-V1.0
And trained with the dataset: mlabonne/Evol-Instruct-Python-1k
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
Doctor-Shotgun/airoboros-2.2.1-y34b | Doctor-Shotgun | "2023-11-11T16:07:44Z" | 1,083 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Yi",
"llama 2",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-11T00:40:57Z" | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Yi
- llama
- llama 2
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- jondurbin/airoboros-2.2.1
---
# airoboros-2.2.1-y34b
Unofficial training of [Jon Durbin](https://huggingface.co/jondurbin)'s powerful airoboros 2.2.1 dataset on [Charles Goddard](https://huggingface.co/chargoddard)'s [Llama-fied Yi 34B model](https://huggingface.co/chargoddard/Yi-34B-Llama), aiming to bring the instruction-following capabilities of the airoboros dataset to the new Yi 34B foundational model.
As a 34B model with grouped-query attention, users will be able to conduct inference on the model with 4bit quantization on a single 24gb consumer GPU.
This Yi model is "Llama-fied", meaning the keys are renamed to match those used in Llama models, eliminating the need for remote code and ensuring compatibility with existing training and inference repositories. Architecturally this is similar to a Llama 2 34B model with an expanded vocab size of 64000.
This model is retrained thanks to compute provided by [alpin](https://huggingface.co/alpindale) with a monkeypatch to the trainer to resolve EOS token issues in the prompter. A smaller batch size and learning rate were used and training was extended by one epoch. 8-bit lora was also used instead of qlora.
## EXL2 Quants Provided by LoneStriker:
[3.0bpw](https://huggingface.co/LoneStriker/airoboros-2.2.1-y34b-3.0bpw-h6-exl2), [4.0bpw](https://huggingface.co/LoneStriker/airoboros-2.2.1-y34b-4.0bpw-h6-exl2), [5.0bpw](https://huggingface.co/LoneStriker/airoboros-2.2.1-y34b-5.0bpw-h6-exl2), [6.0bpw](https://huggingface.co/LoneStriker/airoboros-2.2.1-y34b-6.0bpw-h6-exl2), [8.0bpw](https://huggingface.co/LoneStriker/airoboros-2.2.1-y34b-8.0bpw-h8-exl2)
## Usage:
The intended prompt format is the modified Vicuna 1.1 instruction format used by airoboros v2:
```
A chat.
USER: {prompt}
ASSISTANT:
```
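For reference, a minimal inference sketch with 4-bit quantization (assuming `transformers` with `bitsandbytes` installed; sampling settings are illustrative, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Doctor-Shotgun/airoboros-2.2.1-y34b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,       # fits on a single 24 GB consumer GPU
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "A chat.\nUSER: Explain grouped-query attention in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```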
## Training Details:
The model was trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) as a lora adapter on 1x A100 80gb GPU for 4 epochs, before being fused to the base model with PEFT.
## License:
This model is built on the Yi 34B base model, which has its own custom license included in this repository.
Please refer to the [airoboros 2.2.1 dataset card](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1) regarding the usage of gpt-4 API calls in creating the dataset. |
Systran/faster-whisper-large-v1 | Systran | "2023-11-23T11:35:58Z" | 1,083 | 1 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | "2023-11-23T09:51:16Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v1 model for CTranslate2
This repository contains the conversion of [openai/whisper-large](https://huggingface.co/openai/whisper-large) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v1")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large --output_dir faster-whisper-large-v1 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
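For example, assuming a CUDA device is available, the model can be loaded with a different compute type like this (values shown are illustrative):

```python
from faster_whisper import WhisperModel

# Run the FP16 weights with 8-bit quantized computation on GPU...
model = WhisperModel("large-v1", device="cuda", compute_type="int8_float16")
# ...or fully in int8 on CPU.
# model = WhisperModel("large-v1", device="cpu", compute_type="int8")
```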
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large).**
|
peterkang/mymodel_v3 | peterkang | "2024-01-03T18:32:09Z" | 1,083 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T18:14:35Z" | Entry not found |
exala/db_mc_10.3 | exala | "2024-03-12T04:07:08Z" | 1,083 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-12T04:06:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lltutor/Llama-2-7b-chat-hf-network-test-fine-tuned-cpu | lltutor | "2024-06-29T10:21:50Z" | 1,083 | 0 | null | [
"gguf",
"license:llama2",
"region:us"
] | null | "2024-06-29T08:41:42Z" | ---
license: llama2
---
|
studio-ousia/mluke-base | studio-ousia | "2023-06-16T13:54:44Z" | 1,082 | 6 | transformers | [
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"relation classification",
"question answering",
"multilingual",
"ar",
"bn",
"de",
"el",
"en",
"es",
"fi",
"fr",
"hi",
"id",
"it",
"ja",
"ko",
"nl",
"pl",
"pt",
"ru",
"sv",
"sw",
"te",
"th",
"tr",
"vi",
"zh",
"arxiv:2010.01057",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- ar
- bn
- de
- el
- en
- es
- fi
- fr
- hi
- id
- it
- ja
- ko
- nl
- pl
- pt
- ru
- sv
- sw
- te
- th
- tr
- vi
- zh
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- relation classification
- question answering
license: apache-2.0
---
## mLUKE
**mLUKE** (multilingual LUKE) is a multilingual extension of LUKE.
Please check the [official repository](https://github.com/studio-ousia/luke) for
more details and updates.
This is the mLUKE base model with 12 hidden layers and a hidden size of 768. The total number of parameters in this model is 585M (278M for the word embeddings and encoder, 307M for the entity embeddings).
The model was initialized with the weights of XLM-RoBERTa (base) and trained on the December 2020 version of Wikipedia in 24 languages.
## Note
When you load the model from `AutoModel.from_pretrained` with the default configuration, you will see the following warning:
```
Some weights of the model checkpoint at studio-ousia/mluke-base-lite were not used when initializing LukeModel: [
'luke.encoder.layer.0.attention.self.w2e_query.weight', 'luke.encoder.layer.0.attention.self.w2e_query.bias',
'luke.encoder.layer.0.attention.self.e2w_query.weight', 'luke.encoder.layer.0.attention.self.e2w_query.bias',
'luke.encoder.layer.0.attention.self.e2e_query.weight', 'luke.encoder.layer.0.attention.self.e2e_query.bias',
...]
```
These weights are the weights for entity-aware attention (as described in [the LUKE paper](https://arxiv.org/abs/2010.01057)).
This is expected because `use_entity_aware_attention` is set to `false` by default, but the pretrained weights contain the weights for it in case you enable `use_entity_aware_attention` and have the weights loaded into the model.
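If you do want those weights to be used, the flag can be flipped via the config when loading; a short sketch:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("studio-ousia/mluke-base", use_entity_aware_attention=True)
model = AutoModel.from_pretrained("studio-ousia/mluke-base", config=config)
```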
### Citation
If you find mLUKE useful for your work, please cite the following paper:
```latex
@inproceedings{ri-etal-2022-mluke,
title = "m{LUKE}: {T}he Power of Entity Representations in Multilingual Pretrained Language Models",
author = "Ri, Ryokan and
Yamada, Ikuya and
Tsuruoka, Yoshimasa",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2022",
    url = "https://aclanthology.org/2022.acl-long.505",
}
```
|
tdunlap607/vfc-identification | tdunlap607 | "2023-06-14T20:25:10Z" | 1,082 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"custom_code",
"license:apache-2.0",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-06-14T20:18:48Z" | ---
license: apache-2.0
---
|
wei123602/llama2-13b-fintune2 | wei123602 | "2023-09-06T14:06:38Z" | 1,082 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE2",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-04T12:05:40Z" | ---
license: llama2
datasets:
- huangyt/FINETUNE2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from llama-2-13b on the huangyt/FINETUNE2 dataset, using hyperparameters similar to those of Platypus.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **base model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE2 (roughly 30k training samples in total)
- **peft_type:** LoRA
- **lora_rank:** 16
- **lora alpha:** 8
- **lora dropout:** 0.05
- **lora_target:** gate_proj, up_proj, down_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 4e-4
- **epoch:** 1
- **batch size:** 8
- **microbatch size:** 4
- **warmup steps:** 100
- **weight decay:** cosine
- **cutoff length:** 2048
- **precision:** bf16
- **group_by_length:** True
- **load_in_8bit:** True
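For reference, the PEFT configuration implied by the settings above would look roughly like this (a sketch reconstructed from the listed values, not the exact training script):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=8,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```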
# Fine-Tuning Detail
- **train_loss:** 0.0823
- **train_runtime:** 02:40:01
# Evaluation
- Evaluation results are taken from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.24 | 58.62 | 82.32 | 54.25 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.81 | 57.42 | 82.42 | 55.57 | 39.19 |
|wei123602/llama2-13b-fintune2 | wait | wait | wait | wait | wait |
|
IkariDev/Athena-v2 | IkariDev | "2023-09-25T13:11:14Z" | 1,082 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-23T14:03:35Z" | ---
license: cc-by-nc-4.0
---

Experimental Athena v2 model. Use Alpaca format.
<!-- description start -->
## Description
[Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json)
This repo contains fp16 files of Athena-V2.
[GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GGUF)
[GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-GPTQ)
[exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2)
[AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v2-AWQ)
[OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v2-GGUF)
## Ratings:
Note: I have permission from all users to upload their ratings; I DON'T screenshot random reviews without asking if I can put them here!





<!-- description end -->
<!-- description start -->
## Models and loras used
- Xwin-LM/Xwin-LM-13B-V0.1
- Undi95/ReMM-v2.2-L2-13B
- Undi95/MLewd-L2-13B-v2-3
- Brouz/Slerpeno
- boomerchan/Magpie-13b
```
[Xwin (0.30) + ReMM v2.2 (0.70)](0.45) x [[Xwin (0.40) + MLewd v2-3 (0.60)](0.80) + [Slerpeno(0.50) + Magpie-13b(0.50)](0.20)](0.55)
```
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged)
To TheBloke: please if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creator. |
NeverSleep/Mistral-11B-AirOmniMix | NeverSleep | "2023-10-14T15:36:31Z" | 1,082 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-14T01:57:46Z" | ---
license: cc-by-nc-4.0
---
Replaced Zephyr with Airoboros 2.2 in the mix.
## Description
This repo contains fp16 files of Mistral-11B-AirOmniMix.
## Model used
- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)
## Prompt template
The best one after further testing is this one, since Zephyr is out of the merge:
```
USER: <prompt>
ASSISTANT:
```
But this one work too:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Or use any prompting system from one of the 4 source models; it should work.
## The secret sauce
Mistral-11B-OpenOrcaPlatypus :
```
slices:
- sources:
- model: Open-Orca/Mistral-7B-OpenOrca
layer_range: [0, 24]
- sources:
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-CC-Airo :
```
slices:
- sources:
- model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
layer_range: [0, 24]
- sources:
- model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-AirOmniMix :
```
slices:
- sources:
- model: Mistral-11B-OpenOrcaPlatypus
layer_range: [0, 48]
- model: Mistral-11B-CC-Airo
layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation described here.
## Some scoring I did myself
hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-AirOmniMix), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5452|± |0.0146|
| | |acc_norm|0.5836|± |0.0144|
|arc_easy | 0|acc |0.8321|± |0.0077|
| | |acc_norm|0.8119|± |0.0080|
|hellaswag | 0|acc |0.6381|± |0.0048|
| | |acc_norm|0.8250|± |0.0038|
|piqa | 0|acc |0.8096|± |0.0092|
| | |acc_norm|0.8243|± |0.0089|
|truthfulqa_mc| 1|mc1 |0.3941|± |0.0171|
| | |mc2 |0.5606|± |0.0155|
|winogrande | 0|acc |0.7395|± |0.0123|

## Others
Special thanks to Sushi, [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.
If you want to support me, you can [here](https://ko-fi.com/undiai). |
zhangchuheng123/llama2-alpaca-sft-2epoch | zhangchuheng123 | "2023-10-24T05:51:26Z" | 1,082 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-19T09:28:53Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abe13/full-juni-v0.1 | Abe13 | "2023-10-31T04:39:49Z" | 1,082 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-20T15:38:13Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
---
ChatGPT: "This iteration signifies a meticulously fine-tuned version designed to seamlessly integrate new knowledge into the model's existing framework. The primary goal is to enhance the model's understanding and performance by updating its knowledge base, all while ensuring that its pre-existing capabilities are retained and not compromised." |
jondurbin/airoboros-180b-2.2.1 | jondurbin | "2023-10-25T17:44:59Z" | 1,082 | 17 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"dataset:jondurbin/airoboros-2.2.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T15:36:06Z" | ---
license: other
license_name: falcon-180b-tii-license-1.0
license_link: https://huggingface.co/tiiuae/falcon-180B/raw/main/LICENSE.txt
datasets:
- jondurbin/airoboros-2.2.1
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
This is a fine-tune of the falcon-180b model, bumped to 4k context via rope scaling, on the [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1)
Base model link: https://huggingface.co/tiiuae/falcon-180B
The base model has a custom license: https://huggingface.co/tiiuae/falcon-180B/raw/main/LICENSE.txt
__*Q4_0 quantization will likely not work due to quantized/unquantized (recommended) merge method!*__
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts; however, the dataset also includes a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf).
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
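With transformers, one way to do that (a rough sketch, not the only approach) is a custom stopping criterion that halts generation once the decoded continuation ends with one of the speaker names:

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class SpeakerStop(StoppingCriteria):
    """Stop generation when the decoded text ends with any of the given strings."""

    def __init__(self, tokenizer, stop_strings=("USER:",)):
        self.tokenizer = tokenizer
        self.stop_strings = stop_strings

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(text.rstrip().endswith(stop) for stop in self.stop_strings)

# Usage, assuming `model`, `tokenizer` and `inputs` are already set up:
# stopping = StoppingCriteriaList([SpeakerStop(tokenizer, ("USER:", "Bob:"))])
# output = model.generate(**inputs, stopping_criteria=stopping, max_new_tokens=512)
```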
### Fine tuning info
https://wandb.ai/jondurbin/airoboros-l2-70b-2.2.1/runs/f91zmwuz?workspace=user-jondurbin
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: placeholders with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError("plug in your search implementation here")

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    # Pull every URL out of the text, one per line, de-duplicated.
    return "\n".join(list(set(re.findall(r"https?://\S+", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output
    raise NotImplementedError("plug in your model call here")

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Match lines like ":evidence0: = DuckDuckGo[some input]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
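A rough (equally untested) sketch of how it would be wired together: feed the model's plan text straight into `parse_plan`, which resolves each `:evidenceN:` line in order and returns whatever the `Answer:` line points at.
```python
# Hypothetical wiring, assuming the placeholder functions above have been implemented:
plan_text = open("plan.txt").read()   # or capture the model's reWOO-style output directly
print(parse_plan(plan_text))
```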
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models with `l2` in the name are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
NeverSleep/Echidna-13b-v0.2 | NeverSleep | "2023-10-26T21:08:04Z" | 1,082 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T18:21:57Z" | ---
license: cc-by-nc-4.0
---

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Echidna v0.2 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
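For reference, a typical Alpaca-style prompt looks like this:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```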
Echidna v0.3 will be a heavily refined version of this model/recipe!
This model seems to be pretty sensitive to your generation settings, experiment until you've found your settings.
[Recommended settings - No settings yet(Please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of Echidna-13b-v0.2.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.2)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2 - by waldie](https://huggingface.co/waldie/Athena-v4-8bpw-h8-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.2-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- description start -->
## Models+loras used and recipe
- Xwin-LM/Xwin-LM-13B-V0.2
- IkariDev/Athena-v3
- Heralax/Cat-0.5
- Undi95/PsyMedRP-v1-13B
- cgato/Thespis-13b-v0.4
- KoboldAI/LLaMA2-13B-Tiefighter
- Heralax/Augmental-13b-two-epochs
- Sao10K/SthenoWriter2.1-L2-13B
- Undi95/Storytelling-v2.1-13B-lora
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
Kooten/DaringMaid-20B | Kooten | "2024-01-19T11:57:54Z" | 1,082 | 12 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-13T22:43:00Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# DaringMaid-20B
My goal was to make a Noromaid that's smarter and better at following instructions.
After trying a bunch of different recipes I think this one turned out pretty good.
- I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base, as it's supposed to "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base llama2.
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) of course.
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently and it's still really good, so in the mixer it goes
- I liked [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B) so i threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2)
- There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b), i made a merge with that model before it was revealed to be MythoMax and it turned out pretty good so i used it.
The .yml config files for mergekit with the exact merges can be found in the ["Recipe"](https://huggingface.co/Kooten/DaringMaid/tree/main/Recipe) folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid-20B)
# Quants
EXL2: [6bpw](https://huggingface.co/Kooten/DaringMaid-20B-6bpw-exl2), [3bpw](https://huggingface.co/Kooten/DaringMaid-20B-3bpw-exl2)
[GGUF](https://huggingface.co/Kooten/DaringMaid-20B-GGUF):
***New quants:*** [IQ2-XS](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-IQ2_XXS.gguf)
[Q3_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q5_K_M.gguf)
***Additional Quants from TheBloke:*** [GPTQ](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ), [AWQ](https://huggingface.co/TheBloke/DaringMaid-20B-AWQ), [GGUF](https://huggingface.co/TheBloke/DaringMaid-20B-GGUF)
## Prompt template:
I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request. Do not include descriptions of non-visual qualities such as personality, movements, scents, mental traits, or anything which could not be seen in a still photograph. Do not write in full sentences. Prefix your description with the phrase 'full body portrait,'
### Instruction:
{prompt}
### Response:
```
### Contact
Kooten on discord.
|
seungduk/Bookworm-10.7B-v0.3 | seungduk | "2024-01-01T10:58:50Z" | 1,082 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T10:51:26Z" | ---
license: apache-2.0
---
|
wkshin89/mistral-7b-instruct-ko-test-v0.2 | wkshin89 | "2024-01-02T02:46:42Z" | 1,082 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T02:44:17Z" | Entry not found |
realPCH/kosolra-wiki-QA | realPCH | "2024-01-18T01:32:54Z" | 1,082 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:maywell/ko_wikidata_QA",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T06:47:15Z" | ---
license: mit
datasets:
- maywell/ko_wikidata_QA
---
### Developed by chPark
### Training Strategy
We fine-tuned this model based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated) with [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set)
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "realPCH/ko_solra_merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
sunburstAI/sb_solar_ko_10.7B_v0.2 | sunburstAI | "2024-03-06T23:32:40Z" | 1,082 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-06T07:31:34Z" | ---
library_name: transformers
license: apache-2.0
---
# sb_solar_ko_10.7B_v0.2
## About the model
- This model is a fine-tuned version of [mncai/llama2-13b-dpo-v4](https://huggingface.co/mncai/llama2-13b-dpo-v4).
## Train Dataset
- ko alpaca data, ko orca style data |
notstoic/pygmalion-13b-4bit-128g | notstoic | "2023-05-18T22:55:36Z" | 1,081 | 142 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-18T22:25:44Z" | ---
language: en
license: other
commercial: no
inference: false
---
# pygmalion-13b-4bit-128g
## Model description
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
Quantized from the decoded pygmalion-13b xor format.
**https://huggingface.co/PygmalionAI/pygmalion-13b**
In safetensor format.
### Quantization Information
GPTQ CUDA quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py --wbits 4 models/pygmalion-13b c4 --true-sequential --groupsize 128 --save_safetensors models/pygmalion-13b/4bit-128g.safetensors
``` |
gaodrew/llama-2-7b-sassy-aztec-qa-13k | gaodrew | "2023-10-22T21:14:01Z" | 1,081 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-19T22:32:50Z" | ---
license: llama2
---
llama-2-7b base fine-tuned on a dataset of 13,000 questions and answers about the Incan Empire (Wikipedia entry)
Answers are designed to be snarky, concise, have an \[END\] token, and be in UPPERCASE.
|
unaidedelf87777/nexus-mistral-v1-ep2 | unaidedelf87777 | "2023-11-04T06:16:00Z" | 1,081 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-02T13:16:21Z" | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
## adding so the leaderboard accepts it. will update later after testing.
# evaluation stats for nexus mistral:
# Data mix:
- Open-orca/SlimOrca-Dedup
- teknium/openhermes
- miscellaneous mathematics and private data.
- Conversational data from openchat?? using c-rlft?? stay tuned for when i can afford that lmao. |
vihangd/smartyplats-1.1b-v1 | vihangd | "2023-11-10T04:29:52Z" | 1,081 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-10T04:21:52Z" | ---
license: apache-2.0
---
<p><h1> SmartyPlats-1.1b V1 </h1></p>
An experimental finetune of TinyLLaMA 1T with QLoRA
<h2> Datasets </h2>
Trained on alpaca-style datasets
<p><h2> Prompt Template </h2></p>
Uses alpaca style prompt template |
vilm/vinallama-7b | vilm | "2023-12-19T11:10:40Z" | 1,081 | 22 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"arxiv:2312.11011",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-28T07:45:04Z" | ---
license: llama2
language:
- vi
---
# VinaLLaMA - State-of-the-art Vietnamese LLMs

Read our [Paper](https://huggingface.co/papers/2312.11011) |
oopsung/Yi-Ko-ENW-v1 | oopsung | "2024-03-08T05:29:58Z" | 1,081 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T05:04:00Z" | ---
license: other
---
## **Model Details**
**Model Developers** : oopsung(Sungwoo Park), shleeeee(Seunghyeon Lee)
**Input** Models input text only.
**Output** Models generate text only.
**Base Model** [**beomi/Yi-Ko-6B**](https://huggingface.co/beomi/Yi-Ko-6B)
We used SFT to train the model. |
rrw-x2/KoSOLAR-10.9B-v1.0 | rrw-x2 | "2024-03-21T09:55:55Z" | 1,081 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-21T09:21:54Z" | ---
library_name: transformers
tags:
- merge
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bartowski/dolphin-2.9-llama3-8b-GGUF | bartowski | "2024-05-01T18:04:24Z" | 1,081 | 4 | null | [
"gguf",
"generated_from_trainer",
"axolotl",
"text-generation",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | text-generation | "2024-05-01T17:39:49Z" | ---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of dolphin-2.9-llama3-8b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
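As a small illustration (a generic helper, not anything prescribed by the upstream card), the same ChatML layout can be assembled programmatically before handing it to whatever runtime loads the GGUF file:
```python
def build_chatml_prompt(messages, system="You are Dolphin, a helpful AI assistant."):
    """Assemble a ChatML-style prompt from (role, content) pairs, leaving the assistant turn open."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in messages:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(build_chatml_prompt([("user", "Write a haiku about dolphins.")]))
```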
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.9-llama3-8b-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [dolphin-2.9-llama3-8b-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [dolphin-2.9-llama3-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [dolphin-2.9-llama3-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [dolphin-2.9-llama3-8b-IQ4_NL.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [dolphin-2.9-llama3-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [dolphin-2.9-llama3-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [dolphin-2.9-llama3-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [dolphin-2.9-llama3-8b-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [dolphin-2.9-llama3-8b-IQ3_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [dolphin-2.9-llama3-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [dolphin-2.9-llama3-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [dolphin-2.9-llama3-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [dolphin-2.9-llama3-8b-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [dolphin-2.9-llama3-8b-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [dolphin-2.9-llama3-8b-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-8b-IQ2_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-8b-IQ1_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [dolphin-2.9-llama3-8b-IQ1_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-8b-GGUF/blob/main/dolphin-2.9-llama3-8b-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
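If you only want one of these files rather than the whole repo, a minimal way to grab it is a short `huggingface_hub` call (a sketch; swap in whichever quant filename you picked from the table):
```python
from huggingface_hub import hf_hub_download

# Downloads a single quant file and returns its local path.
path = hf_hub_download(
    repo_id="bartowski/dolphin-2.9-llama3-8b-GGUF",
    filename="dolphin-2.9-llama3-8b-Q4_K_M.gguf",  # any filename from the table above
    local_dir="./models",
)
print(path)
```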
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf | RichardErkhov | "2024-06-04T23:57:52Z" | 1,081 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-04T23:21:46Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-medium-vaguely-human-dialogue - GGUF
- Model creator: https://huggingface.co/pszemraj/
- Original model: https://huggingface.co/pszemraj/gpt2-medium-vaguely-human-dialogue/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-medium-vaguely-human-dialogue.Q2_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q2_K.gguf) | Q2_K | 0.17GB |
| [gpt2-medium-vaguely-human-dialogue.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.IQ3_XS.gguf) | IQ3_XS | 0.18GB |
| [gpt2-medium-vaguely-human-dialogue.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.IQ3_S.gguf) | IQ3_S | 0.19GB |
| [gpt2-medium-vaguely-human-dialogue.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q3_K_S.gguf) | Q3_K_S | 0.19GB |
| [gpt2-medium-vaguely-human-dialogue.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.IQ3_M.gguf) | IQ3_M | 0.2GB |
| [gpt2-medium-vaguely-human-dialogue.Q3_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q3_K.gguf) | Q3_K | 0.21GB |
| [gpt2-medium-vaguely-human-dialogue.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q3_K_M.gguf) | Q3_K_M | 0.21GB |
| [gpt2-medium-vaguely-human-dialogue.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q3_K_L.gguf) | Q3_K_L | 0.23GB |
| [gpt2-medium-vaguely-human-dialogue.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.IQ4_XS.gguf) | IQ4_XS | 0.22GB |
| [gpt2-medium-vaguely-human-dialogue.Q4_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q4_0.gguf) | Q4_0 | 0.23GB |
| [gpt2-medium-vaguely-human-dialogue.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.IQ4_NL.gguf) | IQ4_NL | 0.23GB |
| [gpt2-medium-vaguely-human-dialogue.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q4_K_S.gguf) | Q4_K_S | 0.23GB |
| [gpt2-medium-vaguely-human-dialogue.Q4_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q4_K.gguf) | Q4_K | 0.25GB |
| [gpt2-medium-vaguely-human-dialogue.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q4_K_M.gguf) | Q4_K_M | 0.25GB |
| [gpt2-medium-vaguely-human-dialogue.Q4_1.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q4_1.gguf) | Q4_1 | 0.25GB |
| [gpt2-medium-vaguely-human-dialogue.Q5_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q5_0.gguf) | Q5_0 | 0.27GB |
| [gpt2-medium-vaguely-human-dialogue.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q5_K_S.gguf) | Q5_K_S | 0.27GB |
| [gpt2-medium-vaguely-human-dialogue.Q5_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q5_K.gguf) | Q5_K | 0.29GB |
| [gpt2-medium-vaguely-human-dialogue.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q5_K_M.gguf) | Q5_K_M | 0.29GB |
| [gpt2-medium-vaguely-human-dialogue.Q5_1.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q5_1.gguf) | Q5_1 | 0.29GB |
| [gpt2-medium-vaguely-human-dialogue.Q6_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q6_K.gguf) | Q6_K | 0.32GB |
| [gpt2-medium-vaguely-human-dialogue.Q8_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_gpt2-medium-vaguely-human-dialogue-gguf/blob/main/gpt2-medium-vaguely-human-dialogue.Q8_0.gguf) | Q8_0 | 0.41GB |
Original model description:
---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
widget:
- text: |+
Do you like my new haircut?
person beta:
example_title: haircut
- text: |+
I love to learn new things.. are you willing to teach me something?
person beta:
example_title: teaching
- text: |+
What's your favorite animal? Mine is the dog?
person beta:
example_title: favorite
- text: |+
how much does it cost?
person beta:
example_title: money
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: true
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/gpt2-medium-vaguely-human-dialogue
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of which words make sense together but does not specifically respond to anything - sort of like an alien learning to imitate human words to convince others that it is human.
It achieves the following results on the evaluation set:
- Loss: 4.3281
## Model description
- a decent example of what happens when your batch size is too large and the global optimum does not reflect specific prompts / use cases.
## Intended uses & limitations
- there are no intended uses
## Training and evaluation data
- a parsed version of the wizard of Wikipedia dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 34.991 | 1.0 | 837 | 14.8359 |
| 12.2881 | 2.0 | 1674 | 9.375 |
| 8.5071 | 3.0 | 2511 | 7.2148 |
| 7.6031 | 4.0 | 3348 | 6.1758 |
| 6.4808 | 5.0 | 4185 | 5.5820 |
| 5.8562 | 6.0 | 5022 | 5.0977 |
| 5.6094 | 7.0 | 5859 | 4.8203 |
| 5.2591 | 8.0 | 6696 | 4.5977 |
| 5.0031 | 9.0 | 7533 | 4.4219 |
| 4.8837 | 10.0 | 8370 | 4.3281 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
mradermacher/Qwen2-0.5B-Chat_DPO-GGUF | mradermacher | "2024-06-11T22:14:00Z" | 1,081 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:JCHAVEROT/Qwen2-0.5B-Chat_DPO",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T22:11:34Z" | ---
base_model: JCHAVEROT/Qwen2-0.5B-Chat_DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/JCHAVEROT/Qwen2-0.5B-Chat_DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
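As one rough, untested sketch (assuming the `llama-cpp-python` bindings, which this card does not itself prescribe), a single-file GGUF quant can be loaded and queried like this:
```python
from llama_cpp import Llama

# model_path should point at whichever quant file you downloaded from the table below.
llm = Llama(model_path="Qwen2-0.5B-Chat_DPO.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```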
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.IQ3_XS.gguf) | IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_DPO.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Undi95/ReasoningEngine | Undi95 | "2023-09-09T21:13:57Z" | 1,080 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-05T14:58:08Z" | ---
license: cc-by-nc-4.0
---
https://huggingface.co/stabilityai/StableBeluga-13B + https://huggingface.co/jondurbin/airoboros-lmoe-13b-2.1/tree/main/adapters/reasoning weight: 0.42 name: ReasoningEngine
For Dampf. |
quantumaikr/falcon-180B-wizard_alpaca_dolly_orca | quantumaikr | "2023-09-08T21:28:51Z" | 1,080 | 4 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"dataset:nRuaif/wizard_alpaca_dolly_orca",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-07T16:31:18Z" | ---
datasets:
- tiiuae/falcon-refinedweb
- nRuaif/wizard_alpaca_dolly_orca
language:
- en
- de
- es
- fr
inference: false
license: unknown
---
# 🇰🇷 quantumaikr/falcon-180B-wizard_alpaca_dolly_orca
**quantumaikr/falcon-180B-wizard_alpaca_dolly_orca is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr) based on [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat)**
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-wizard_alpaca_dolly_orca"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Contact
🇰🇷 www.quantumai.kr
🇰🇷 [email protected] [Inquiries about adopting hyperscale LLM technology are welcome] |
Lazycuber/L2-7b-Base-Guanaco-Vicuna | Lazycuber | "2023-09-21T12:04:11Z" | 1,080 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-21T11:56:07Z" | Entry not found |
shoppal/shoppal-v0.1-sf | shoppal | "2023-10-19T01:51:43Z" | 1,080 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"en",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-17T11:04:22Z" | ---
license: llama2
datasets:
- timdettmers/openassistant-guanaco
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- text-generation-inference
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sanster/PowerPaint-V1-stable-diffusion-inpainting | Sanster | "2023-12-25T10:23:51Z" | 1,080 | 5 | diffusers | [
"diffusers",
"safetensors",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | image-to-image | "2023-12-25T02:38:02Z" | Model from: https://huggingface.co/JunhaoZhuang/PowerPaint-v1
Based on `runwayml/stable-diffusion-inpainting`: the UNet has been replaced with PowerPaint's UNet, and the token embeddings (P_ctxt, P_shape, P_obj) newly added by PowerPaint have been integrated into the text_encoder.
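For reference, here is a minimal sketch of loading this repository with the standard `StableDiffusionInpaintPipeline` from diffusers (which the repo's pipeline tag suggests); the input/output file names and the way the `P_obj` token is placed in the prompt are illustrative assumptions, and `demo.py` below remains the recommended entry point.
```python
# Minimal sketch (assumptions noted above): load this repo with the standard
# diffusers inpainting pipeline and run one inpainting step.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Sanster/PowerPaint-V1-stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical input files; replace with your own image and mask.
image = Image.open("input.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")

# "P_obj" is one of the PowerPaint task tokens merged into the text_encoder;
# placing it directly in the prompt is an assumption -- demo.py shows the intended usage.
result = pipe(prompt="P_obj a red sports car", image=image, mask_image=mask).images[0]
result.save("output.png")
```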
Download the Python file from [here](https://huggingface.co/Sanster/PowerPaint-V1-stable-diffusion-inpainting/tree/main), then run:
```bash
python3 demo.py
``` |
peterkang/mymodel_v6 | peterkang | "2024-01-04T11:43:37Z" | 1,080 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T11:23:30Z" | Entry not found |
fatgong/5G78QRTTLWro79F4TQs9keoNZX5AJmDGbuQA9JkNr6aec3WE_vgg | fatgong | "2024-03-27T22:48:19Z" | 1,080 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:12:34Z" | Entry not found |
KBNIT/KoSOLAR-10.7B-QLoRA-NEFTune-kolon-v2.0 | KBNIT | "2024-03-27T04:51:06Z" | 1,080 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-25T00:23:35Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
---
## Base Model:
We built an LLM based on yanolja/KoSOLAR-10.7B-v0.3.
## Model Description
We applied QLoRA (64, 16) and NEFTune for LLM fine-tuning, with a learning rate of 1e-5.
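A minimal sketch of how such a setup can be expressed with `transformers`/`peft` is shown below; reading QLoRA (64, 16) as LoRA rank 64 and alpha 16 is an assumption, and the NEFTune noise alpha, target modules, and batch size are illustrative placeholders rather than the exact recipe used here.
```python
# Sketch only: 4-bit QLoRA + NEFTune fine-tuning setup (values partly assumed, see note above).
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "yanolja/KoSOLAR-10.7B-v0.3"
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# "QLoRA(64, 16)" read as r=64, lora_alpha=16 (assumption).
lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# NEFTune is enabled via the noise alpha; lr and epoch count follow the card.
args = TrainingArguments(output_dir="kosolar-qlora-neftune",
                         learning_rate=1e-5,
                         num_train_epochs=1,
                         per_device_train_batch_size=1,   # assumption
                         neftune_noise_alpha=5.0)          # alpha value is an assumption
# Pass `args`, the model, and your instruction dataset to transformers.Trainer (or trl.SFTTrainer).
```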
## Train Detail
We trained for 1 epoch on our Korean data.
## Others
We are building an LLM for Kolon! |
duyntnet/Fimbulvetr-11B-v2-imatrix-GGUF | duyntnet | "2024-05-19T13:22:06Z" | 1,080 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Fimbulvetr-11B-v2",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-19T10:44:12Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Fimbulvetr-11B-v2
---
Quantizations of https://huggingface.co/Sao10K/Fimbulvetr-11B-v2
# From original readme
Prompt Formats - Alpaca or Vicuna. Either one works fine.
Recommended SillyTavern Presets - Universal Light
Alpaca:
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
Vicuna:
```
System: <Prompt>
User: <Input>
Assistant:
``` |
RichardErkhov/sagorsarker_-_emailgenerator-gguf | RichardErkhov | "2024-06-05T01:12:11Z" | 1,080 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-05T00:47:36Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
emailgenerator - GGUF
- Model creator: https://huggingface.co/sagorsarker/
- Original model: https://huggingface.co/sagorsarker/emailgenerator/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [emailgenerator.Q2_K.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q2_K.gguf) | Q2_K | 0.17GB |
| [emailgenerator.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.IQ3_XS.gguf) | IQ3_XS | 0.18GB |
| [emailgenerator.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.IQ3_S.gguf) | IQ3_S | 0.19GB |
| [emailgenerator.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q3_K_S.gguf) | Q3_K_S | 0.19GB |
| [emailgenerator.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.IQ3_M.gguf) | IQ3_M | 0.2GB |
| [emailgenerator.Q3_K.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q3_K.gguf) | Q3_K | 0.21GB |
| [emailgenerator.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q3_K_M.gguf) | Q3_K_M | 0.21GB |
| [emailgenerator.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q3_K_L.gguf) | Q3_K_L | 0.23GB |
| [emailgenerator.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.IQ4_XS.gguf) | IQ4_XS | 0.22GB |
| [emailgenerator.Q4_0.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q4_0.gguf) | Q4_0 | 0.23GB |
| [emailgenerator.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.IQ4_NL.gguf) | IQ4_NL | 0.23GB |
| [emailgenerator.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q4_K_S.gguf) | Q4_K_S | 0.23GB |
| [emailgenerator.Q4_K.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q4_K.gguf) | Q4_K | 0.25GB |
| [emailgenerator.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q4_K_M.gguf) | Q4_K_M | 0.25GB |
| [emailgenerator.Q4_1.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q4_1.gguf) | Q4_1 | 0.25GB |
| [emailgenerator.Q5_0.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q5_0.gguf) | Q5_0 | 0.27GB |
| [emailgenerator.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q5_K_S.gguf) | Q5_K_S | 0.27GB |
| [emailgenerator.Q5_K.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q5_K.gguf) | Q5_K | 0.29GB |
| [emailgenerator.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q5_K_M.gguf) | Q5_K_M | 0.29GB |
| [emailgenerator.Q5_1.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q5_1.gguf) | Q5_1 | 0.29GB |
| [emailgenerator.Q6_K.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q6_K.gguf) | Q6_K | 0.32GB |
| [emailgenerator.Q8_0.gguf](https://huggingface.co/RichardErkhov/sagorsarker_-_emailgenerator-gguf/blob/main/emailgenerator.Q8_0.gguf) | Q8_0 | 0.41GB |
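To try one of the quants above locally, here is a minimal sketch using `llama-cpp-python`; the choice of the Q4_K_M file, the context size, and the prompt are illustrative assumptions.
```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python (file name and prompt are examples).
from llama_cpp import Llama

llm = Llama(model_path="emailgenerator.Q4_K_M.gguf", n_ctx=1024)

out = llm("Subject: Meeting follow-up\n\nHi team,", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```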
Original model description:
---
language: en
tags:
- email-generation
license: mit
---
EmailGenerator is a GPT-2 model fine-tuned for text generation on the [emailblog](https://www.kaggle.com/datasets/mikeschmidtavemac/emailblog) dataset, built for the [EmailWriter](https://github.com/sagorbrur/EmailWriter) repository.
For details about this model check [EmailWriter](https://github.com/sagorbrur/EmailWriter) repository.
|
Xcvddax/llama2-chat-attck-gguf | Xcvddax | "2024-06-25T15:13:49Z" | 1,080 | 0 | adapter-transformers | [
"adapter-transformers",
"gguf",
"text-generation",
"en",
"dataset:Xcvddax/Attack-combine",
"license:llama2",
"region:us"
] | text-generation | "2024-06-06T15:55:07Z" | ---
license: llama2
datasets:
- Xcvddax/Attack-combine
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
--- |
sander-wood/text-to-music | sander-wood | "2023-08-19T06:17:31Z" | 1,079 | 135 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"music",
"en",
"arxiv:2211.11216",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-11-21T08:43:55Z" | ---
license: mit
language: en
widget:
- text: This is a traditional Irish dance music.
inference:
parameters:
top_p: 0.9
max_length: 1024
do_sample: true
tags:
- music
---
# Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task
## Model description
This language-music model takes [BART-base](https://huggingface.co/facebook/bart-base) and fine-tunes it on 282,870 English text-music pairs, where all scores are represented in ABC notation. It was introduced in the paper [Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task](https://arxiv.org/abs/2211.11216) by Wu et al. and released in [this repository](https://github.com/sander-wood/text-to-music).
It is capable of generating complete and semantically consistent sheet music directly from descriptions in natural language based on text. To the best of our knowledge, this is the first model that achieves text-conditional symbolic music generation which is trained on real text-music pairs, and the music is generated entirely by the model and without any hand-crafted rules.
This language-music model is available for online use and experience on [Textune: Generating Tune from Text](https://huggingface.co/spaces/sander-wood/text-to-music). With this online platform, you can easily input your desired text descriptions and receive a generated sheet music output from the model.
Due to copyright reasons, we are unable to publicly release the training dataset of this model. Instead, we have made available the [WikiMusicText](https://huggingface.co/datasets/sander-wood/wikimusictext) (WikiMT) dataset, which includes 1010 pairs of text-music data and can be used to evaluate the performance of language-music models.
## Intended uses & limitations
You can use this model for text-conditional music generation. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. We recommend using the script in [this repository](https://github.com/sander-wood/text-to-music) for inference. The generated tunes are in ABC notation, and can be converted to sheet music or audio using [this website](https://abc.rectanglered.com/), or [this software](https://easyabc.sourceforge.net/).
Its creativity is limited: it cannot perform well on tasks requiring a high degree of creativity (e.g., melody style transfer), and it is input-sensitive. For more information, please check [our paper](https://arxiv.org/abs/2211.11216).
### How to use
Here is how to use this model in PyTorch:
```python
import torch
from samplings import top_p_sampling, temperature_sampling
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained('sander-wood/text-to-music')
model = AutoModelForSeq2SeqLM.from_pretrained('sander-wood/text-to-music')
model = model  # optionally move the model to GPU here, e.g. model = model.cuda()
max_length = 1024
top_p = 0.9
temperature = 1.0
text = "This is a traditional Irish dance music."
input_ids = tokenizer(text,
return_tensors='pt',
truncation=True,
max_length=max_length)['input_ids']
decoder_start_token_id = model.config.decoder_start_token_id
eos_token_id = model.config.eos_token_id
decoder_input_ids = torch.tensor([[decoder_start_token_id]])
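# Autoregressive decoding loop: sample one token per step with top-p (nucleus) and
# temperature sampling, stopping once the EOS token is produced.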
for t_idx in range(max_length):
outputs = model(input_ids=input_ids,
decoder_input_ids=decoder_input_ids)
probs = outputs.logits[0][-1]
probs = torch.nn.Softmax(dim=-1)(probs).detach().numpy()
sampled_id = temperature_sampling(probs=top_p_sampling(probs,
top_p=top_p,
return_probs=True),
temperature=temperature)
decoder_input_ids = torch.cat((decoder_input_ids, torch.tensor([[sampled_id]])), 1)
if sampled_id!=eos_token_id:
continue
else:
tune = "X:1\n"
tune += tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
print(tune)
break
```
### Generation Examples
Here are some examples generated by this model without cherry-picking.
```
######################## INPUT TEXT ########################
This is a traditional Irish dance music.
Note Length-1/8
Meter-6/8
Key-D
####################### OUTPUT TUNES #######################
X:1
L:1/8
M:6/8
K:D
A | BEE BEE | Bdf edB | BAF FEF | DFA BAF | BEE BEE | Bdf edB | BAF DAF | FED E2 :: A |
Bef gfe | faf edB | BAF FEF | DFA BAF | Bef gfe | faf edB | BAF DAF | FED E2 :|
X:2
L:1/8
M:6/8
K:D
A |: DED F2 A | d2 f ecA | G2 B F2 A | E2 F GFE | DED F2 A | d2 f ecA | Bgf edc |1 d3 d2 A :|2
d3 d2 a || a2 f d2 e | f2 g agf | g2 e c2 d | e2 f gfe | fed gfe | agf bag | fed cde | d3 d2 a |
agf fed | Adf agf | gfe ecA | Ace gfe | fed gfe | agf bag | fed cde | d3 d2 ||
X:3
L:1/8
M:6/8
K:D
BEE BEE | Bdf edB | BAF FEF | DFA dBA | BEE BEE | Bdf edB | BAF FEF |1 DED DFA :|2 DED D2 e |:
faf edB | BAF DFA | BAF FEF | DFA dBA | faf edB | BAF DFA | BdB AFA |1 DED D2 e :|2 DED DFA ||
```
```
######################## INPUT TEXT ########################
This is a jazz-swing lead sheet with chord and vocal.
####################### OUTPUT TUNES #######################
X:1
L:1/8
M:4/4
K:F
"F" CFG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F |
"Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"Dm7""G7" F6 z2 |"C" C4 C3 C |
"C7" C2 B,2"F" C4 |"F" C4 C3 C |"Dm" D2 C2"Dm/C" D4 |"Bb" D4 D3 D |"Bb" D2 C2"C7" D4 |"F" C8- |
"F" C4"Gm" z C"C7" FG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F |
"Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"F" F6 z2 |]
X:2
L:1/4
M:4/4
K:F
"^A""F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A |"F" F4 |"F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A |
"F" F4 |"Gm" B3 B |"Am7" B2"D7" B2 |"Gm" B2"D7" B A |"Gm7" G4 |"F" A3 A |"Am7" A2"D7" A2 |
"Gm7" G2"C7" G A |"F" F4 |"Bb7" F3 G |"F" A2 A2 |"Gm" B2"C7" B2 |"F" c2"D7" c c |"Gm7" c2"C7" B2 |
"F" A2"F7" A2 |"Bb" B2"F" B A |"Bb" B2"F" B A |"Gm" B2"F" B A |"Gm7" B2"F" B A |"Gm7" B2"F" B A |
"C7" B2 c2 |"F""Bb7" A4 |"F""Bb7" z4 |]
X:3
L:1/4
M:4/4
K:Bb
B, ||"Gm""^A1" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A |
"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z B, |"Gm""^B" G,2 B, D |"D7" ^F A2 G/=F/ |
"Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 ||
"^C""F7""^A2" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3"C7" A |
"F7" G4 ||"F7""^A3" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3 A |
"C7" G4 ||"^B""Gm""^C" B2 c B |"Cm" c B c B |"Gm7" c2 B A |"C7" B3 A |"Bb" B2 c B |"G7" d c B A |
"Cm" G2 A G |"F7" F2 z G ||"^C""F7" F F3 |"Bb" D D3 |"Cm" E E3 |"D7" ^F F3 |"Gm" G2 A B |"C7" d3 d |
"Gm" d3 d |"D7" d3 B, ||"^D""Gm" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F |
"Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 |]
```
```
######################## INPUT TEXT ########################
This is a Chinese folk song from the Jiangnan region. It was created during the Qianlong era (1735-1796) of the Qing dynasty. Over time, many regional variations were created, and the song gained popularity both in China and abroad. One version of the song describes a custom of giving jasmine flowers, popular in the southern Yangtze delta region of China.
####################### OUTPUT TUNES #######################
X:1
L:1/8
Q:1/4=100
M:2/4
K:C
"^Slow" DA A2 | GA c2- | c2 G2 | c2 GF | GA/G/ F2 | E2 DC | DA A2 | GA c2- | c2 GA | cd- d2 |
cA c2- | c2 GA | cd- d2 | cA c2- | c2 GA | c2 A2 | c2 d2 | cA c2- | c2 c2 | A2 G2 | F2 AG | F2 ED |
CA,/C/ D2- | D2 CD | F2 A2 | G2 ED | CG A2 | G2 FD | CA,/C/ D2- | D2 CD | F2 A2 | G2 ED |
CG A2 | G2 FD | CA,/C/ D2- | D2 z2 :|
X:2
L:1/8
Q:1/4=100
M:2/4
K:C
"^ MDolce" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd |
A2 AG | E2 ED | CD E2- | E2 z2 |"^ howeveroda" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- |
E2 z2 | A2 cA | GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 |
Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd | A2 AG | E2 ED |
CD E2- | E2 z2 |"^DDtisata" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | A2 cA |
GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 | Ac de | d2 AG |
cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |
Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |"^ Easy" Ac de | d2 AG | cA cd |
A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |]
X:3
L:1/8
Q:1/4=60
M:4/4
K:C
"^S books defe.." AA A2 cdcc | AcAG A4- | A8 | A,4 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA |
AG E2 D2 C2 | D6 ED | C2 D4 C2 | D2 C2 D4 | C2 A,2 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA |
AG E2 D2 C2 | D6 z2 |]
```
### BibTeX entry and citation info
```bibtex
@inproceedings{
wu2023exploring,
title={Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task},
author={Shangda Wu and Maosong Sun},
booktitle={The AAAI-23 Workshop on Creative AI Across Modalities},
year={2023},
url={https://openreview.net/forum?id=QmWXskBhesn}
}
``` |
Monero/Pygmalion-Metharme-7b-4bit-TopScore | Monero | "2023-05-05T22:03:08Z" | 1,079 | 45 | transformers | [
"transformers",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-30T03:04:16Z" | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B 4bit</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/metharme-7b
Quantized for KoboldAI use using https://github.com/0cc4m/GPTQ-for-LLaMa
I created several dozen quantized variations of this model and believe this variation to be "best." <br>
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme 7b - 16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-AO-TS-Trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
</tbody>
</table>
</body>
</html>
<br><b>Other benchmark scores at the bottom of readme.</b>
<hr>
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Pretend then that the user has replied with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
Same process applies. Usually, it is best to do a sliding window over the user and model turns, but keep the system prompt fixed at the start of the context window.
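As a concrete illustration (a sketch, not part of the original release), the helper below keeps the system prompt fixed and slides a window over the most recent turns; it trims by turn count for brevity, whereas a real implementation should trim by token count.
```python
# Sketch: build a Metharme prompt with a fixed system prompt and a sliding window of recent turns.
def build_prompt(system_prompt, turns, max_turns=8):
    """turns is a list of (role, text) pairs where role is "user" or "model"."""
    prompt = f"<|system|>{system_prompt}"
    for role, text in turns[-max_turns:]:          # keep only the most recent turns
        prompt += f"<|user|>{text}" if role == "user" else f"<|model|>{text}"
    return prompt + "<|model|>"                    # ask the model to generate the next reply

history = [
    ("user", "Start!"),
    ("model", "You are standing in front of an old, abandoned house..."),
    ("user", "go inside"),
]
print(build_prompt("This is a text adventure game. Describe the scenario to the user "
                   "and give him three options to pick from on each turn.", history))
```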
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
<hr>
<p><strong><font size="5">Click to Expand Benchmarks of different quantized variations</font></strong></p>
<strong><font size="4">The lower the number, the better the score.</font></strong>
<html>
<body>
<details>
<summary>Benchmarks Sorted by C4-New score</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted by Wikitext2</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted by PTB-new Score</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted in Alphabetical Order</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
</tbody>
</table>
</details>
</body>
</html> |
ajibawa-2023/carl-llama-2-13b | ajibawa-2023 | "2023-08-16T20:16:44Z" | 1,079 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:jerryjalapeno/nart-100k-synthetic",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-14T10:07:25Z" | ---
license: cc-by-nc-nd-4.0
datasets:
- jerryjalapeno/nart-100k-synthetic
language:
- en
---
**Carl: A Therapist AI**
Early prevention can help a lot of people avoid depression and other mental illnesses. Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain.
Many people don't have access to a therapist due to financial, personal, social or other restrictions.
Here comes Carl: a Therapist AI which can respond to you quickly. It is trained on more than 100,000 sets of conversations, each set having 10~15 conversations between Carl and a client.
Base data was obtained from jerryjalapeno/nart-100k-synthetic. This data was further refined and fine-tuned. The entire dataset is synthetic. Synthetic data is used because there is little to no therapy conversation data which is publicly available and directly applicable to an LLM.
This is by no means a replacement for a doctor or professional therapist. If you are under stress or going through a tough time, please seek professional help or talk to a friend or family member.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. For 3 epochs, training took 50 hours. The DeepSpeed codebase was used for training. This model was trained on top of Meta's Llama-2.
GGML Quant models are converted by Kijana Mitchell. Extremely thankful to him.
**GPTQ**
GPTQ: [TheBloke](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GPTQ)
Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for guiding me and making this model available.
**Example Prompt:**
```
This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down
Context
You are Carl, A Therapist AI
USER: <prompt>
CARL:
```
Note:
This is just a research experiment, and the model should NOT be used as a replacement for a human therapist. Use the `cat` command to join all pytorch_model.bin parts.
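A minimal sketch of the join step (the part file names below are hypothetical; use the part names actually present in this repository):
```bash
# Concatenate the split checkpoint parts back into a single file.
# Adjust the part names to match the files shipped in the repo.
cat pytorch_model.bin.part-a pytorch_model.bin.part-b > pytorch_model.bin
``` |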
Devio/test2 | Devio | "2023-09-01T23:58:28Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-01T23:43:57Z" | Entry not found |
shoppal/shoppal-v0.1 | shoppal | "2023-10-18T23:44:46Z" | 1,079 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-09T06:14:59Z" | ---
license: llama2
datasets:
- timdettmers/openassistant-guanaco
language:
- en
metrics:
- accuracy
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fredithefish/OpenZephyrChat-v0.2 | Fredithefish | "2023-12-17T01:40:12Z" | 1,079 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"zephyr",
"openchat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-17T00:53:12Z" | ---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- mistral
- merge
- zephyr
- openchat
---
<div align="center">
<img src="https://huggingface.co/Fredithefish/OpenZephyrChat/resolve/main/logo.jpeg" alt="Alt Text" width="250"/>
<h1>🔥 OpenZephyrChat v0.2 - Merging Zephyr-beta with OpenChat-3.5-1210 🔥</h1>
This model is a slerp merge between Zephyr-beta and OpenChat-3.5-1210; it was created using the <a href="https://github.com/cg123/mergekit">mergekit</a> repository (a minimal loading sketch is included below)
</div><br><br>
# Model Information
- **License:** Apache-2 (commercially usable)
- **Parameter size:** 7B
- **Merge:**
- [OpenChat](https://huggingface.co/openchat/openchat-3.5-1210)
- [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) serving as the base model
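As a reference, here is a minimal sketch for loading the merged model with the standard transformers API (the prompt below is only illustrative; the card does not state a required prompt template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fredithefish/OpenZephyrChat-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative prompt; adapt it to the Zephyr- or OpenChat-style template that works best for you.
prompt = "Explain what a slerp merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```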
## Evals
coming soon...
|
realPCH/ko_solra_merge | realPCH | "2024-01-18T01:23:18Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T01:29:35Z" | ---
license: mit
---
### Developed by chPark
### Training Strategy
We fine-tuned this model based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated) with various datasets.
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "realPCH/ko_solra_merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
kaitchup/Llama-2-7b-hf-gptq-2bit | kaitchup | "2024-01-20T12:25:07Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-01-19T13:00:47Z" | ---
license: mit
---
Llama 2 7B quantized in 2-bit with GPTQ.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer
import torch
w = 2
model_path = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
quantizer = GPTQQuantizer(bits=w, dataset="c4", model_seqlen = 4096)
quantized_model = quantizer.quantize_model(model, tokenizer)
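# --- Added illustration (not part of the original snippet): saving the result. ---
# Assumes optimum's GPTQQuantizer.save(); the output directory name is arbitrary.
save_dir = "Llama-2-7b-hf-gptq-2bit"
quantizer.save(quantized_model, save_dir)
tokenizer.save_pretrained(save_dir)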
``` |
kaitchup/Llama-2-7b-hf-gptq-3bit | kaitchup | "2024-01-20T12:24:53Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"gptq",
"region:us"
] | text-generation | "2024-01-19T13:36:39Z" | ---
license: mit
---
Llama 2 7B quantized in 3-bit with GPTQ.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer
import torch
w = 3
model_path = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
quantizer = GPTQQuantizer(bits=w, dataset="c4", model_seqlen = 4096)
quantized_model = quantizer.quantize_model(model, tokenizer)
``` |
sunburstAI/solar_ko_v0.1 | sunburstAI | "2024-02-25T00:02:46Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-24T22:58:14Z" | ---
library_name: transformers
license: apache-2.0
---
# solar_ko_v0.1
## About the model
- This model is a fine-tuned version of [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2), which is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). |
Habana/whisper | Habana | "2024-03-02T02:50:05Z" | 1,079 | 0 | null | [
"optimum_habana",
"license:apache-2.0",
"region:us"
] | null | "2024-02-26T16:38:05Z" | ---
license: apache-2.0
---
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU).
It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks.
Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana).
## Whisper model HPU configuration
This model only contains the `GaudiConfig` file for running the [Whisper](https://huggingface.co/openai/whisper-small) model on Habana's Gaudi processors (HPU).
**This model contains no model weights, only a GaudiConfig.**
This enables you to specify the following (a short sketch of reading these flags back appears after the list):
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
- `use_torch_autocast`: whether to use Torch Autocast for managing mixed precision
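For illustration, a minimal sketch of inspecting these flags with optimum-habana (assuming the usual `GaudiConfig.from_pretrained` interface):
```python
from optimum.habana import GaudiConfig

# This repository ships no model weights, so only the Gaudi-specific configuration is loaded here.
gaudi_config = GaudiConfig.from_pretrained("Habana/whisper")
print(gaudi_config.use_fused_adam, gaudi_config.use_fused_clip_norm, gaudi_config.use_torch_autocast)
```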
## Usage
The model is instantiated the same way as in the Transformers library.
The only difference is that there are a few new training arguments specific to HPUs.\
It is strongly recommended to train this model with bf16 mixed-precision training for optimal performance and accuracy.
[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/speech-recognition/run_speech_recognition_seq2seq.py) is a sequence-to-sequence speech recognition example script to fine-tune a model. You can run it with Whisper with the following command:
```bash
python run_speech_recognition_seq2seq.py \
--model_name_or_path="openai/whisper-small" \
--dataset_name="mozilla-foundation/common_voice_11_0" \
--dataset_config_name="hi" \
--language="hindi" \
--train_split_name="train+validation" \
--eval_split_name="test" \
--gaudi_config_name="Habana/whisper" \
--max_steps="5000" \
--output_dir="/tmp/whisper-small-hi" \
--per_device_train_batch_size="48" \
--per_device_eval_batch_size="2" \
--logging_steps="25" \
--learning_rate="1e-5" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="225" \
--preprocessing_num_workers="1" \
--length_column_name="input_length" \
--max_duration_in_seconds="30" \
--text_column_name="sentence" \
--freeze_feature_encoder="False" \
--group_by_length \
--bf16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--predict_with_generate \
--use_habana \
--use_hpu_graphs_for_inference \
--label_features_max_length 128 \
--dataloader_num_workers 8 \
--throughput_warmup_steps 3
```
Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
|
RussianNLP/FRED-T5-Summarizer | RussianNLP | "2024-04-22T11:57:33Z" | 1,079 | 3 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | "2024-04-02T08:04:16Z" | ---
license: mit
language:
- ru
pipeline_tag: summarization
---
# FRED-T5 1.7B Summarizer
The model was trained by [SberDevices](https://sberdevices.ru/).
The model is trained on a mix of open summarisation data [RussianNLP/Mixed-Summarization-Dataset](https://huggingface.co/datasets/RussianNLP/Mixed-Summarization-Dataset) for the Russian language and uses the prefix token '\<LM\>'.
## Usage
```python
import torch
from transformers import GPT2Tokenizer, T5ForConditionalGeneration
tokenizer = GPT2Tokenizer.from_pretrained('RussianNLP/FRED-T5-Summarizer',eos_token='</s>')
model = T5ForConditionalGeneration.from_pretrained('RussianNLP/FRED-T5-Summarizer')
device='cuda'
model.to(device)
input_text='<LM> Сократи текст.\n В деревне, затерянной среди зелёных холмов, жил старик по имени Иван. Его жизнь протекала медленно и размеренно. Каждое утро Иван выходил на поля, чтобы заботиться о своём скромном участке земли. Он выращивал картофель и морковь, которые были его главным источником пищи. Вечера старик проводил у камина, читая книги и вспоминая молодость. Жизнь в деревне была тяжёлая, но Иван находил в ней простые радости.'
input_ids=torch.tensor([tokenizer.encode(input_text)]).to(device)
outputs=model.generate(input_ids,eos_token_id=tokenizer.eos_token_id,
num_beams=5,
min_new_tokens=17,
max_new_tokens=200,
do_sample=True,
no_repeat_ngram_size=4,
top_p=0.9)
print(tokenizer.decode(outputs[0][1:]))
# print result: Старик Иван живёт размеренной жизнью в деревне, выращивая овощи и находя радость в простых вещах.
```
# Authors
+ Sber Devices:
+ Albina Akhmetgareeva
+ Ilia Kuleshov
+ Vlad Leschuk
+ Alexander Abramov
+ Alena Fenogenova
# Cite us
```
@misc{akhmetgareeva2024summary,
title={Towards Russian Summarization: can architecture solve data limitations problems?},
author={Albina Akhmetgareeva and Ilia Kuleshov and Vlad Leschuk and Alexander Abramov and Alena Fenogenova},
year={2024},
}
```
|
karinegabsschon/classifier | karinegabsschon | "2024-04-17T12:09:11Z" | 1,079 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"endpoints_compatible",
"region:us"
] | null | "2024-04-03T08:31:11Z" | Entry not found |
fixie-ai/ultravox-v0.2 | fixie-ai | "2024-06-25T16:25:56Z" | 1,079 | 24 | transformers | [
"transformers",
"safetensors",
"ultravox",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-07T22:12:43Z" | ---
language:
- en
license: mit
library_name: transformers
---
# Model Card for Ultravox
Ultravox is a multimodal Speech LLM built around a pretrained Whisper and Llama 3 backbone. See https://ultravox.ai for the GitHub repo and more information.
## Model Details
### Model Description
Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message).
The input to the model is given as a text prompt with a special <|audio|> pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio.
Using the merged embeddings as input, the model will then generate output text as usual.
In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output.
No preference tuning has been applied to this revision of the model.
- **Developed by:** Fixie.ai
- **License:** MIT
### Model Sources [optional]
- **Repository:** https://ultravox.ai
- **Demo [optional]:** See repo
## Uses
Voice agents, speech-to-speech translation, analysis of spoken audio
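As a rough sketch of programmatic use (the pipeline input format below is an assumption carried over from other Ultravox releases; consult the repository code for the authoritative interface):
```python
import librosa
import transformers

# trust_remote_code is needed because Ultravox registers a custom pipeline.
pipe = transformers.pipeline(model="fixie-ai/ultravox-v0.2", trust_remote_code=True)

audio, sr = librosa.load("question.wav", sr=16000)  # hypothetical input file
turns = [{"role": "system", "content": "You are a friendly and helpful character."}]

# The 'audio'/'turns'/'sampling_rate' keys are assumed, not confirmed by this card.
print(pipe({"audio": audio, "turns": turns, "sampling_rate": sr}, max_new_tokens=64))
```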
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
|
MetaIX/GPT4-X-Alpasta-30b-4bit | MetaIX | "2023-05-28T06:46:44Z" | 1,078 | 70 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-27T04:55:33Z" | <p><strong><font size="5">Information</font></strong></p>
GPT4-X-Alpasta-30b working with Oobabooga's Text Generation Webui and KoboldAI.
<p>This is an attempt at improving Open Assistant's performance as an instruct while retaining its excellent prose. The merge consists of <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a> and <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a>.</p>
<p><strong><font size="5">Update 05.27.2023</font></strong></p>
<p>Updated the ggml quantizations to be compatible with the latest version of llamacpp (again).</p>
<p><strong>What's included</strong></p>
<P>GPTQ: 2 quantized versions. One was quantized using --true-sequential and --act-order optimizations, and the other was quantized using --true-sequential --groupsize 128 optimizations.</P>
<P>GGML: 3 quantized versions. One quantized using q4_1, another was quantized using q5_0, and the last one was quantized using q5_1.</P>
<p><strong>GPU/GPTQ Usage</strong></p>
<p>To use with your GPU using GPTQ pick one of the .safetensors along with all of the .jsons and .model files.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>
<p><strong>CPU/GGML Usage</strong></p>
<p>To use your CPU using GGML(Llamacpp) you only need the single .bin ggml file.</p>
<p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
<p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>
<p><strong><font size="5">Benchmarks</font></strong></p>
<p><strong><font size="4">--true-sequential --act-order</font></strong></p>
<strong>Wikitext2</strong>: 4.998758792877197
<strong>Ptb-New</strong>: 9.802155494689941
<strong>C4-New</strong>: 7.341384410858154
<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, therefore evaluations are marginally higher. However, this version allows fitting the whole model at full context using only 24GB VRAM.
<p><strong><font size="4">--true-sequential --groupsize 128</font></strong></p>
<strong>Wikitext2</strong>: 4.70257568359375
<strong>Ptb-New</strong>: 9.323467254638672
<strong>C4-New</strong>: 7.041860580444336
<strong>Note</strong>: This version uses <i>--groupsize 128</i>, resulting in better evaluations. However, it consumes more VRAM. |
DAMO-NLP-MT/polylm-chat-13b | DAMO-NLP-MT | "2023-07-28T07:43:26Z" | 1,078 | 6 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2307.06018",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-28T06:46:08Z" | ---
license: apache-2.0
---
# Model Card for PolyLM-Multialpaca
This model is finetuned on [polyLM-13b](https://huggingface.co/DAMO-NLP-MT/polylm-13b) using the following datasets:
# Demo
[Open](https://modelscope.cn/studios/damo/demo-polylm-multialpaca-13b/summary)
# Bias, Risks, and Limitations
The information below in this section are copied from the model's [official model card](https://arxiv.org/pdf/2307.06018.pdf):
> Our contributions are fully methodological: adding the support of multilingualism to LLM during training and SFT phases. It is unavoidable that PolyLM might exhibit several common deficiencies of language models, e.g. hallucination and toxicity. PolyLM should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
# Citation
**BibTeX:**
```bibtex
@misc{wei2023polylm,
title={PolyLM: An Open Source Polyglot Large Language Model},
author={Xiangpeng Wei and Haoran Wei and Huan Lin and Tianhao Li and Pei Zhang and Xingzhang Ren and Mei Li and Yu Wan and Zhiwei Cao and Binbin Xie and Tianxiang Hu and Shangjie Li and Binyuan Hui and Bowen Yu and Dayiheng Liu and Baosong Yang and Fei Huang and Jun Xie},
year={2023},
eprint={2307.06018},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pankajmathur/model_101 | pankajmathur | "2024-03-04T13:02:12Z" | 1,078 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:pankajmathur/orca_mini_v1_dataset",
"dataset:pankajmathur/lima_unchained_v1",
"dataset:pankajmathur/WizardLM_Orca",
"dataset:pankajmathur/alpaca_orca",
"dataset:pankajmathur/dolly-v2_orca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/dolphin",
"arxiv:2306.02707",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-04T15:35:39Z" | ---
language:
- en
license: llama2
library_name: transformers
datasets:
- pankajmathur/orca_mini_v1_dataset
- pankajmathur/lima_unchained_v1
- pankajmathur/WizardLM_Orca
- pankajmathur/alpaca_orca
- pankajmathur/dolly-v2_orca
- garage-bAInd/Open-Platypus
- ehartford/dolphin
model-index:
- name: model_101
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.69
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.85
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/model_101
name: Open LLM Leaderboard
---
# model_101
A hybrid (explain + instruct) style Llama2-70b model, Pleae check examples below for both style prompts, Here is the list of datasets used:
* Open-Platypus
* Alpaca
* WizardLM
* Dolly-V2
* Dolphin Samples (~200K)
* Orca_minis_v1
* Alpaca_orca
* WizardLM_orca
* Dolly-V2_orca
* Plus more datasets which I am planning to release as open-source datasets sometime in the future.
<br>
**P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.**
<br>
## Evaluation
We evaluated model_001 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|||
|:------:|:-------:|
|**Task**|**Value**|
|*ARC*|0.6869|
|*HellaSwag*|0.8642|
|*MMLU*|0.6992|
|*TruthfulQA*|0.5885|
|*Winogrande*|0.8208|
|*GSM8k*|0.4481|
|*DROP*|0.5510|
|**Total Average**|**0.6655**|
<br>
## Prompt Format
Here is the Orca prompt format
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me about Orcas.
### Assistant:
```
Here is the Alpaca prompt format
```
### User:
Tell me about Alpacas.
### Assistant:
```
#### OobaBooga Instructions:
This model requires up to 45GB of GPU VRAM in 4-bit, so it can be loaded directly on a single RTX 6000/L40/A40/A100/H100 GPU or a pair of RTX 4090/L4/A10/RTX 3090/RTX A5000 GPUs.
So, if you have access to a machine with 45GB of GPU VRAM and have installed [OobaBooga Web UI](https://github.com/oobabooga/text-generation-webui) on it, you can just download this model by using the HF repo link directly on the OobaBooga Web UI "Model" Tab/Page and use the **load-in-4bit** option.

After that go to Default Tab/Page on OobaBooga Web UI and **copy paste above prompt format into Input** and Enjoy!

<br>
#### Code Instructions:
Below shows a code example on how to use this model via Orca prompt
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("pankajmathur/model_101")
model = AutoModelForCausalLM.from_pretrained(
"pankajmathur/model_101",
torch_dtype=torch.float16,
load_in_4bit=True,
low_cpu_mem_usage=True,
device_map="auto"
)
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
#generate text steps
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Below shows a code example on how to use this model via Alpaca prompt
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("pankajmathur/model_101")
model = AutoModelForCausalLM.from_pretrained(
"pankajmathur/model_101",
torch_dtype=torch.float16,
load_in_4bit=True,
low_cpu_mem_usage=True,
device_map="auto"
)
#generate text steps
instruction = "Tell me about Alpacas."
prompt = f"### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<br>
#### license disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{model_101,
author = {Pankaj Mathur},
title = {model_101: A hybrid (explain + instruct) style Llama2-70b model},
month = {August},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/pankajmathur/model_101}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_101)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 66.55 |
| ARC (25-shot) | 68.69 |
| HellaSwag (10-shot) | 86.42 |
| MMLU (5-shot) | 69.92 |
| TruthfulQA (0-shot) | 58.85 |
| Winogrande (5-shot) | 82.08 |
| GSM8K (5-shot) | 44.81 |
| DROP (3-shot) | 55.1 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_101)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.46|
|AI2 Reasoning Challenge (25-Shot)|68.69|
|HellaSwag (10-Shot) |86.42|
|MMLU (5-Shot) |69.92|
|TruthfulQA (0-shot) |58.85|
|Winogrande (5-shot) |82.08|
|GSM8k (5-shot) |44.81|
|
clicknext/phayathaibert | clicknext | "2023-11-13T17:17:25Z" | 1,078 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"camembert",
"fill-mask",
"th",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-10-17T09:18:19Z" | ---
language:
- th
widget:
- text: i<_><mask><_>to<_>school<_>yesterday.
---
# PhayaThaiBERT base model
Demo on [Google Colab](https://colab.research.google.com/drive/1aQf4tKCpFMqYt9wUUMhr86K9BvBH6OFE?usp=sharing)
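A minimal fill-mask sketch using the widget example above (untested; the `<_>` tokens appear to stand in for spaces, as in the widget text):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="clicknext/phayathaibert")
print(fill("i<_><mask><_>to<_>school<_>yesterday."))
```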
|
lfsm/llama2_0.1_codellama_0.9_7b | lfsm | "2023-11-02T09:22:52Z" | 1,078 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-02T07:34:10Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
migtissera/Tess-M-v1.2 | migtissera | "2023-11-24T18:48:56Z" | 1,078 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-23T20:40:08Z" | ---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
# Note:
This version is experimental and has been deprecated. Please use the stable release Tess-M-v1.3: https://huggingface.co/migtissera/Tess-M-v1.3
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-M-v1.2 was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
|
Sao10K/Lila-70B-L2 | Sao10K | "2023-11-28T14:27:47Z" | 1,078 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T04:49:45Z" | ---
license: cc-by-nc-4.0
language:
- en
---
A failed checkpoint of Euryale 1.5, which I have named Lila.
This includes Godzilla2, one of my most favourite 70B L2 models; the model is then trained afterwards.
Just like Solus, She feels pretty smart, and is verbose in her writing. Go ahead and 'plap plap' or 'ah ah mistress'.
I like her writing style, it felt different. Well, I think?
She does feel like a decent improvement over 1.3 and 1.4, but not enough for me to call it Euryale 1.5.
Kinda reluctant in darker NSFW scenarios sometimes like my ex, but all it takes is a little bit of 'convincing' and she'll do it.
Prompt Template: Vicuna 1.1 / Alpaca | Both Works. I tested.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
or
```
User: <Prompt>
Assistant:
```
Enjoy.
|
Sao10K/Solus-70B-L2 | Sao10K | "2023-11-28T14:25:37Z" | 1,078 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T08:18:26Z" | ---
license: cc-by-nc-4.0
language:
- en
---
A failed checkpoint of Euryale 1.5, which I have named Solus, but I liked it enough to keep.
Merged with 1.3, then fine-tuned afterwards.
She feels pretty smart, and is verbose in her writing; even if you go 'ah ah mistress', she'll give good responses to that at least.
She is also Neutral like Euryale 1.5 in her alignment? At least, it felt that way during my tests. Lmk if you get different results.
She is not the best in terms of benchmarks from what I tested on, but I enjoyed her anyway.
Prompt Template: Vicuna 1.1 / Alpaca | Both Works. I tested.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
or
```
User: <Prompt>
Assistant:
```
Enjoy.
|
KnutJaegersberg/Yi-34B-200K-MiniOrca | KnutJaegersberg | "2023-11-29T08:05:02Z" | 1,078 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:TinyPixel/orca-mini",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T19:33:49Z" |
---
license: other
license_name: yi-license
license_link: LICENSE
pipeline_tag: text-generation
datasets:
- TinyPixel/orca-mini
---
Trained for 2.7 epochs on the 50k shortest records of miniorca dataset with NEFTune.
The base model is the official yi-34b-200k model.
Prompt Example:
```
### System:
You are an AI assistant. You will be given a task. You must generate a detailed and long answer.
### User:
What is AGI?
### Assistant:
```
License
The source code in this repo is licensed under the Apache 2.0 license. The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the Model License Agreement 2.0. To apply for the official commercial license, please contact us ([email protected]). |
liuda1/Mistral-7B-v0.2 | liuda1 | "2023-12-07T06:50:22Z" | 1,078 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-06T06:12:54Z" | ---
license: apache-2.0
---
This model was trained with SFT on top of the Mistral base model, and its inference quality is very good.
--- |
realPCH/kosolra-kullm | realPCH | "2024-02-14T23:19:10Z" | 1,078 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:nlpai-lab/kullm-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T08:13:39Z" | ---
license: mit
datasets:
- nlpai-lab/kullm-v2
---
### Developed by chPark
### Training Strategy
We fine-tuned this model based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "realPCH/kosolra-kullm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
papercat404/testcat_v0.4 | papercat404 | "2024-04-09T07:33:32Z" | 1,078 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-09T07:01:28Z" | ---
license: apache-2.0
language:
- en
- ko
library_name: transformers
---
!! Demo version - fine-tuned on only a small amount of data
### info
base model : hwkwon/S-SOLAR-10.7B-v1.4
data : modified nox sft data
epoch : 1 epoch
machine : L40 X 4 (vram 200GB)
### prompt
```
\nInput : {"prompt"}
\nOutput : {"response"}
```
### method
We use the nox SFT data:
https://huggingface.co/datasets/davidkim205/kollm-converations
We also try to enhance the dataset. |
gglabs/TinyLM-Chat-0611-3-epoch | gglabs | "2024-06-11T16:40:56Z" | 1,078 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T13:40:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Jenna-Qwen2.0-GGUF | mradermacher | "2024-06-18T18:05:17Z" | 1,078 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:V3N0M/Jenna-Qwen2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T18:01:19Z" | ---
base_model: V3N0M/Jenna-Qwen2.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/V3N0M/Jenna-Qwen2.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
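As a hedged example (not part of the quantizer's own instructions), the snippet below sketches one way to download a single quant from this repo and run it with llama-cpp-python; the filename matches the Q4_K_M row in the table below, and the prompt and settings are placeholders.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo (Q4_K_M is the "fast, recommended" size below).
model_path = hf_hub_download(
    repo_id="mradermacher/Jenna-Qwen2.0-GGUF",
    filename="Jenna-Qwen2.0.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```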
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.IQ3_XS.gguf) | IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Jenna-Qwen2.0-GGUF/resolve/main/Jenna-Qwen2.0.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/ecaresnet26t.ra2_in1k | timm | "2024-02-10T23:38:11Z" | 1,077 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"arxiv:1910.03151",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-05T17:56:23Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for ecaresnet26t.ra2_in1k
An ECA-ResNet-T image classification model with Efficient Channel Attention.
This model features:
* ReLU activations
* tiered 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
* Efficient Channel Attention
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 16.0
- GMACs: 3.4
- Activations (M): 10.5
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks: https://arxiv.org/abs/1910.03151
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('ecaresnet26t.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ecaresnet26t.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'ecaresnet26t.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@InProceedings{wang2020eca,
title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks},
author={Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo and Qinghua Hu},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
|
timm/resnet200d.ra2_in1k | timm | "2024-02-10T23:40:25Z" | 1,077 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-05T18:40:03Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet200d.ra2_in1k
A ResNet-D image classification model.
This model features:
* ReLU activations
* 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 64.7
- GMACs: 20.0
- Activations (M): 43.1
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet200d.ra2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet200d.ra2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet200d.ra2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@article{He2018BagOT,
title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018},
pages={558-567}
}
```
|
notstoic/OPT-13B-Nerybus-Mix-4bit-128g | notstoic | "2023-04-07T09:07:37Z" | 1,077 | 5 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-07T08:06:53Z" | ---
language: en
license: other
commercial: no
inference: false
---
# OPT-13B-Nerybus-Mix-4bit-128g
## Model description
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
This is a 4-bit GPTQ quantization of OPT-13B-Nerybus-Mix, original model:
**https://huggingface.co/KoboldAI/OPT-13B-Nerybus-Mix**
### Quantization Information
Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Nerybus-Mix c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Nerybus-Mix/OPT-13B-Nerybus-Mix-4bit-128g.pt
python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Nerybus-Mix c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Nerybus-Mix/OPT-13B-Nerybus-Mix-4bit-128g.safetensors
```
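One possible way to load the resulting 4-bit weights for inference is with the AutoGPTQ library. This is only a sketch: the explicit quantize config simply mirrors the `--wbits 4 --groupsize 128` settings above, and your GPTQ fork or front-end may expect a different loading path.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_dir = "notstoic/OPT-13B-Nerybus-Mix-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="OPT-13B-Nerybus-Mix-4bit-128g",  # matches the .safetensors name produced above
    use_safetensors=True,
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),  # mirrors --wbits 4 --groupsize 128
    device="cuda:0",
)
prompt = "The derelict station drifted in silence until"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```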
### License
OPT-13B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
cis-lmu/glot500-base | cis-lmu | "2023-11-09T23:48:37Z" | 1,077 | 18 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"multilingual",
"dataset:cis-lmu/Glot500",
"arxiv:2305.12182",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-05-17T13:58:06Z" | ---
license: apache-2.0
language:
- multilingual
datasets:
- cis-lmu/Glot500
metrics:
- accuracy
- f1
- perplexity
library_name: transformers
pipeline_tag: fill-mask
---
# Glot500 (base-sized model)
Glot500 model (Glot500-m) pre-trained on 500+ languages using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2305.12182.pdf) (ACL 2023) and first released in [this repository](https://github.com/cisnlp/Glot500).
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cis-lmu/glot500-base')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained('cis-lmu/glot500-base')
>>> model = AutoModelForMaskedLM.from_pretrained("cis-lmu/glot500-base")
>>> # prepare input
>>> text = "Replace me by any text you'd like."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> # forward pass
>>> output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{imanigooghari-etal-2023-glot500,
title={Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages},
author={ImaniGooghari, Ayyoob and Lin, Peiqin and Kargaran, Amir Hossein and Severini, Silvia and Jalili Sabet, Masoud and Kassner, Nora and Ma, Chunlan and Schmid, Helmut and Martins, Andr{\'e} and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
journal={arXiv preprint arXiv:2305.12182},
year={2023}
}
```
<!---
```bibtex
@inproceedings{imanigooghari-etal-2023-glot500,
title = {Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages},
author = {ImaniGooghari, Ayyoob and Lin, Peiqin and Kargaran, Amir Hossein and Severini, Silvia and Jalili Sabet, Masoud and Kassner, Nora and Ma, Chunlan and Schmid, Helmut and Martins, Andr{\'e} and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich},
year = 2023,
month = jul,
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Toronto, Canada},
pages = {1082--1117},
url = {https://aclanthology.org/2023.acl-long.61}
}
```
--> |
quantumaikr/llama-2-70B-instruct | quantumaikr | "2023-09-03T05:44:08Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-03T02:52:14Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/quantumairk-llama-2-70B-instruct
## Model Description
`quantumaikr/quantumairk-llama-2-70B-instruct` is a Llama2 70B model (garage-bAInd/Platypus2-70B-instruct) fine-tuned on an additional dataset.
## Usage
Start chatting with `quantumaikr/quantumairk-llama-2-70B-instruct` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/quantumairk-llama-2-70B-instruct")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/quantumairk-llama-2-70B-instruct", torch_dtype=torch.float16, device_map="auto")
system_prompt = "You are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal."
message = "Write me a poem please"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message}[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.9, top_p=0.75, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : [email protected] |
KoboldAI/LLaMA2-13B-TiefighterLR | KoboldAI | "2023-11-05T16:12:20Z" | 1,077 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T22:45:06Z" | ---
license: llama2
---
# LLaMA2-13B-TiefighterLR
TiefighterLR is a merged model created by merging two different LoRAs on top of a well-established existing merge.
This LR (Less Rodeo) version merges the adventure LoRA at 3% instead of the original 5%, reducing its second-person adventure bias.
Testers found that this model understands your own characters and instruction prompts better, at the cost of a weaker writing bias/style of its own.
To achieve this the following recipe was used:
* We begin with the base model Undi95/Xwin-MLewd-13B-V0.2, which is a well-established merge; contrary to the name, this model does not have a strong NSFW bias.
* Then we applied the PocketDoc/Dans-RetroRodeo-13b lora which is a finetune on the Choose your own Adventure datasets from our Skein model.
* After applying this lora we merged the original model with the newly created PocketDoc/Dans-RetroRodeo-13b merge at 3% to weaken the newly introduced adventure bias.
* The resulting merge was used as a new base model to which we applied Blackroot/Llama-2-13B-Storywriter-LORA and repeated the same trick, this time at 10%.
This means this model contains the following ingredients from their upstream models for as far as we can track them:
- Undi95/Xwin-MLewd-13B-V0.2
- - Undi95/ReMM-S-Light (base/private)
- Undi95/CreativeEngine
- Brouz/Slerpeno
- - elinas/chronos-13b-v2
- jondurbin/airoboros-l2-13b-2.1
- NousResearch/Nous-Hermes-Llama2-13b+nRuaif/Kimiko-v2 LORA
- CalderaAI/13B-Legerdemain-L2+lemonilia/limarp-llama2-v2 LORA
- - KoboldAI/LLAMA2-13B-Holodeck-1
- NousResearch/Nous-Hermes-13b
- OpenAssistant/llama2-13b-orca-8k-3319
- ehartford/WizardLM-1.0-Uncensored-Llama2-13b
- Henk717/spring-dragon
- The-Face-Of-Goonery/Huginn-v3-13b
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/StoryTelling
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
- PocketDoc/Dans-RetroRodeo-13b
- Blackroot/Llama-2-13B-Storywriter-LORA
# Usage
This model is meant to be creative. If you let it improvise, you get better results than if you drown it in details.
## Story Writing
Regular story writing in the traditional way is supported: simply copy-paste your story and continue writing. Optionally use an instruction in memory or an author's note to guide the direction of your story.
### Generate a story on demand
To generate stories on demand you can use an instruction (tested in the Alpaca format) such as "Write a novel about X, use chapters and dialogue"; this will generate a story. The format can vary between generations depending on how the model chooses to begin: either write what you want as shown in the earlier example, or write the beginning of the story yourself so the model can follow your style. A few retries can also help if the model gets it wrong.
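For example, a minimal Alpaca-style prompt for an on-demand story could look like this (the instruction wording is only illustrative):
```
### Instruction:
Write a novel about a stranded starship crew, use chapters and dialogue.
### Response:
```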
## Chatbots and personas
Unlike the original Tiefighter, this model is better at handling existing Character Cards as long as they do not contain a lot of second-person writing or second-person introductions (You). Setting > as a custom stop sequence can help fix potential mistakes, as can turning multi-line replies off.
You can also use instructions to create your characters.
For example, you can put this in memory in regular chat mode:
```
### Instruction:
Generate a conversation between Alice and Henk where they discuss language models.
In this conversation Henk is excited to teach Alice about Tiefighter.
### Response:
```
Because the model is a merge of a variety of models, it should support a broad range of instruct formats, or plain chat mode. If you have a particular favourite, try it; otherwise we recommend either the regular chat mode or Alpaca's format.
## Instruct Prompting
This model features various instruct models trained on a variety of instruction styles; when testing the model we used Alpaca for our own tests. If you prefer a different format, chances are it can work.
During instructions we have observed that in some cases the adventure data can leak; it may be worth experimenting with > as the prefix for a user command to remedy this, though this may result in a stronger fiction bias.
Keep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.
## Adventuring and Adventure Games
This model contains a LoRA that was trained on the same adventure dataset as the KoboldAI Skein model. Adventuring is best done using a small introduction to the world and your objective while using the > prefix for a user command (KoboldAI's adventure mode).
It is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply manually correct the output to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.
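An illustrative opening turn in adventure mode (the > line is your command; the setting is made up for the example):
```
You are a wandering sellsword searching for the lost crown of Eldoria.
> You light a torch and enter the abandoned watchtower.
```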
## Discovered something cool and want to engage with us?
Join our community at https://koboldai.org/discord !
### This model would not be possible without the awesome work from:
Undi95, PocketDoc, Blackroot, Brouz, The Face of Goonery, zattio770, PygmalionAI, TokenBender, nRuaif, lemonilia and Xwin-LM. |
budecosystem/sql-millennials-13b | budecosystem | "2023-10-27T17:29:57Z" | 1,077 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-27T16:04:48Z" | ---
license: llama2
language:
- en
library_name: transformers
---
## Introducing Text-to-SQL Translation Model - Millennials. 🎉
Welcome to our Text-to-SQL Translation Model repository! Our model is specifically fine-tuned for text-to-SQL tasks, aiming to revolutionize how systems understand and translate natural language instructions into SQL queries. Built on CodeLLaMa 13B, our model has been meticulously fine-tuned with a curated dataset comprising 100k SQL query generation instructions, ensuring quality and precision.
## Features
* Specialized in converting natural language text to SQL queries.
* Fine-tuned on a diverse set of 100k SQL query generation instructions.
* Easy to integrate and use for generating SQL queries on the fly.
## Generate responses
Now that your model is fine-tuned, you're ready to generate responses: you can easily generate SQL queries from natural language instructions. To do this, you'll be using our generate.py script, which allows for quick inference and can fetch models directly from the Hugging Face model hub.
Here's a quick guide on how to use it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/sql-millennials-13b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/sql-millennials-13b")
prompt = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
          "USER: Create SQL query for the given table schema and question ASSISTANT:")
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
The script runs inference using the pre-trained model from the Hugging Face model hub and prints the generated SQL query.
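Note that the instruction in the snippet does not yet contain a schema or question; presumably these are supplied in the USER turn. A hypothetical sketch, continuing from the snippet above (the exact prompt layout the model expects is an assumption):
```python
# Continuing from the snippet above (tokenizer and model already loaded).
# The schema/question placement inside the prompt is a guess, not a documented format.
def build_prompt(schema: str, question: str) -> str:
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
        "USER: Create SQL query for the given table schema and question\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        "ASSISTANT:"
    )

schema = "CREATE TABLE transactions (id INT, amount DECIMAL(12, 2), created_at DATE)"
question = "Get a list of all transactions above $10,000 in the last quarter."
inputs = tokenizer(build_prompt(schema, question), return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_length=256)[0]))
```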
## Training details
The model was trained on 4 A100 80GB GPUs for approximately 55 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 4 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 19206 |
| learning_rate | 2e-5 |
| lr schedular type | cosine |
| warmup steps | 2000 |
| optimizer | adamw |
| fp16 | True |
| GPU | 4 A100 80GB |
## Why millennials?
1. Automated Database Management for Businesses
Scenario: Small to medium-sized enterprises (SMEs) often lack dedicated IT teams to handle database queries, making it challenging to retrieve specific data quickly for analysis and decision-making.
Use Case: Your text-to-SQL model can be integrated into a company's internal systems, allowing staff without technical SQL knowledge to retrieve data. They can input natural language requests, such as "Get a list of all transactions above $10,000 in the last quarter," and the system, powered by your model, would convert this into a corresponding SQL query to retrieve the data.
2. Automating Data Analytics Processes
Scenario: Data analysts and business professionals often face bottlenecks in generating insights due to the complexities of SQL query formulation, especially when immediate or repetitive data retrieval and analysis are required.
Use Case: Your text-to-SQL model serves as a transformative intermediary in this scenario. By integrating the model into their data analytics systems, organizations enable professionals to input data requests in natural language. For instance, an analyst could input, "Show the trend of online sales growth over the past five years," and the system would instantly convert this request into a SQL query, retrieve the data, and even integrate it into visualization tools for immediate insight generation. This functionality not only accelerates the analytical processes but also democratizes data-driven insights across different organizational departments, allowing even non-technical staff to leverage the power of real-time data analytics without deep knowledge of SQL.
3. Enhancing CMS Interfaces
Scenario: Content Management Systems (CMS) are often non-intuitive for non-technical content managers when it comes to complex data retrieval or database management.
Use Case: CMS providers can leverage your model to enhance their system's backend interface. Content managers can use natural language to request specific data, like "Find all blog posts in May 2023 with more than 500 views," and the model will generate the appropriate SQL to retrieve the information. This feature makes database management more accessible, efficient, and user-friendly.
4. Customer Support Optimization
Scenario: Customer support centers often need to retrieve client or product information stored in databases while resolving tickets or inquiries, requiring basic knowledge of SQL.
Use Case: Your model can be integrated into support ticketing systems, enabling support personnel to type requests in natural language, such as "Show all open tickets from customers in New York filed this month," and immediately receive the data needed to expedite their resolution process, improving customer service efficiency and response time.
5. Data Journalism and Research
Scenario: Journalists and researchers frequently rely on complex databases to gather insights and data points necessary for their work but may lack the technical know-how of SQL.
Use Case: By integrating your text-to-SQL model into research software or journalistic tools, professionals can query databases using natural language. For example, a journalist might input, "Retrieve the average household income in Texas in 2022," and your model would facilitate immediate access to this data, allowing for more efficient research and data-driven storytelling.
## Contributing
We welcome contributions to help improve the model or address issues. Please feel free to submit pull requests or open issues to discuss changes or improvements.
### Acknowledgments
We'd like to thank the open-source community and the researchers whose foundational work laid the path to this model.
|
evolusion-ai/Evo-70B-v1 | evolusion-ai | "2023-10-28T04:32:27Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-28T03:31:00Z" | ---
language:
- en
library_name: transformers
license: llama2
---
# Evo-70B-v1
### Introduction
This is a fine-tuned Llama-2-70B model.
Website: https://evolusion.ai/
### Evaluation
| Metric | Value |
|-----------------------|-------|
| ARC (25-shot) | 68.43 |
| HellaSwag (10-shot) | 90.54 |
| MMLU (5-shot) | 70.28 |
| TruthfulQA (0-shot) | 72.15 |
| Avg. | 75.35 |
|
vihangd/smartyplats-1.1b-v2 | vihangd | "2023-11-24T02:55:04Z" | 1,077 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-24T02:50:11Z" | ---
license: apache-2.0
---
<p><h1> SmartyPlats-1.1b V2 </h1></p>
An experimental finetune of TinyLLaMA 2T with Alpaca-QLoRA
<h2> Datasets </h2>
Trained on Alpaca-style datasets
<p><h2> Prompt Template </h2></p>
Uses alpaca style prompt template |
seungduk/Bookworm-10.7B-v0.2 | seungduk | "2024-01-01T04:42:08Z" | 1,077 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T04:32:21Z" | ---
license: apache-2.0
---
|
shleeeee/mistral-7b-ko-dpo-v1 | shleeeee | "2024-03-08T00:15:53Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ko",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T07:08:11Z" | ---
license: other
language:
- ko
pipeline_tag: text-generation
---
# Model Card for mistral-7b-ko-dpo-v1
It is a mistral-7b model fine-tuned on Korean data.
## Model Details
* **Model Developers**: shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
* **Input**: Models input text only.
* **Output**: Models generate text only.
* **Base Model**: mistralai/mistral-7B-v1
* **Training**: SFT and DPO were used to train the model. |
realPCH/240103_llama_test_3 | realPCH | "2024-01-03T07:53:22Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T07:49:22Z" | ---
license: mit
---
|
wkshin89/yi-ko-6b-instruct-test-v0.1 | wkshin89 | "2024-01-04T09:55:08Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T09:52:43Z" | Entry not found |
refarde/Mistral-7B-Instruct-v0.2-Ko-S-Core | refarde | "2024-01-23T10:43:41Z" | 1,077 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"en",
"ko",
"dataset:royboy0416/ko-alpaca",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-14T20:03:51Z" | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- ko
tags:
- finetuned
- text-generation
datasets:
- royboy0416/ko-alpaca
inference: false
model_type: mixtral
---
# Model Card for Mistral-7B-Instruct-v0.2-Ko-S-Core
## Model Details
* **Base Model**: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## Dataset Details
### Used Datasets
- royboy0416/ko-alpaca
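## Usage
The card does not include a usage example. Since the base model is Mistral-7B-Instruct-v0.2, applying its chat template is a reasonable assumption; a minimal sketch:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "refarde/Mistral-7B-Instruct-v0.2-Ko-S-Core"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# "Recommend places to visit in Seoul."
messages = [{"role": "user", "content": "서울에서 가볼 만한 곳을 추천해 주세요."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```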
|
testmod/koen-llama2-13b-sft_testver | testmod | "2024-01-31T04:41:32Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-31T03:46:01Z" | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
hyunseoki/ko-en-llama2-13b
**Training Objective**
The model was instruction fine-tuned with several datasets. |
spow12/Llama3_ko_4.2_sft | spow12 | "2024-05-07T04:21:07Z" | 1,077 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ko",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-07T02:34:27Z" | ---
library_name: transformers
license: mit
language:
- en
- ko
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Helsinki-NLP/opus-mt-es-it | Helsinki-NLP | "2023-08-16T11:32:59Z" | 1,076 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-it
* source languages: es
* target languages: it
* OPUS readme: [es-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.it | 55.9 | 0.751 |
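For quick experimentation, the checkpoint can be loaded with the Hugging Face Transformers translation pipeline; a minimal sketch:
```python
from transformers import pipeline

# Spanish -> Italian translation with the MarianMT checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-it")
print(translator("El gato duerme en el sofá.")[0]["translation_text"])
```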
|