| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
MaziyarPanahi/mergekit-slerp-aywerbb-GGUF | MaziyarPanahi | 2024-06-17T20:38:41Z | 2,549 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Equall/Saul-Base",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-aywerbb"
] | text-generation | 2024-06-17T20:16:01Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:Equall/Saul-Base
- base_model:HuggingFaceH4/zephyr-7b-beta
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-aywerbb-GGUF
base_model: mergekit-community/mergekit-slerp-aywerbb
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-aywerbb-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-aywerbb-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-aywerbb](https://huggingface.co/mergekit-community/mergekit-slerp-aywerbb)
## Description
[MaziyarPanahi/mergekit-slerp-aywerbb-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-aywerbb-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-aywerbb](https://huggingface.co/mergekit-community/mergekit-slerp-aywerbb).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
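For example, with `llama-cpp-python` from the list above (a minimal sketch; the exact GGUF filename in this repo is an assumption, so check the repo's file list and adjust the glob pattern):
```python
# Minimal sketch with llama-cpp-python (recent versions); from_pretrained requires huggingface_hub.
# The Q4_K_M filename pattern is an assumption -- adjust it to a file that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/mergekit-slerp-aywerbb-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matched against the repo's GGUF files
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a SLERP model merge is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```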
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
ManifestoChatbot/llama-3-8b-Instruct-bnb-4bit-flori-demo | ManifestoChatbot | 2024-06-29T10:21:59Z | 2,549 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-29T10:11:32Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** iFlor
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/heavenly-mouse-v1-GGUF | mradermacher | 2024-06-04T11:58:38Z | 2,548 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DeverDever/heavenly-mouse-v1",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T11:08:24Z | ---
base_model: DeverDever/heavenly-mouse-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DeverDever/heavenly-mouse-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
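As a minimal starting point, the files from the table below can be fetched with `huggingface_hub` and passed to llama.cpp or any compatible runtime (the Q4_K_M filename matches the corresponding row in the Provided Quants table):
```python
# Download one of the quants listed below and print its local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/heavenly-mouse-v1-GGUF",
    filename="heavenly-mouse-v1.Q4_K_M.gguf",  # listed in the Provided Quants table
)
print(path)  # pass this path to llama.cpp (-m) or llama-cpp-python (model_path=...)
```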
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/heavenly-mouse-v1-GGUF/resolve/main/heavenly-mouse-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
castorini/tct_colbert-msmarco | castorini | 2021-04-21T01:29:30Z | 2,546 | 0 | transformers | [
"transformers",
"pytorch",
"arxiv:2010.11386",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | This model reproduces the TCT-ColBERT dense retrieval approach described in the following paper:
> Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Distilling Dense Representations for Ranking using Tightly-Coupled Teachers.](https://arxiv.org/abs/2010.11386) arXiv:2010.11386, October 2020.
For more details on how to use it, check our experiments in [Pyserini](https://github.com/castorini/pyserini/blob/master/docs/experiments-tct_colbert.md)
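For orientation, a dense-retrieval sketch with Pyserini is shown below; the module path and the prebuilt index name are assumptions based on recent Pyserini releases, so defer to the linked experiments doc for the exact, version-matched commands.
```python
# Sketch only: module paths and the prebuilt index name may differ across Pyserini versions.
from pyserini.search.faiss import FaissSearcher, TctColBertQueryEncoder

encoder = TctColBertQueryEncoder('castorini/tct_colbert-msmarco')
searcher = FaissSearcher.from_prebuilt_index('msmarco-passage-tct_colbert-hnsw', encoder)
hits = searcher.search('what is a dense retriever')
for hit in hits[:5]:
    print(hit.docid, hit.score)
```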
|
raxtemur/trocr-base-ru | raxtemur | 2024-05-29T23:05:29Z | 2,545 | 13 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"ocr",
"image-to-text",
"ru",
"en",
"dataset:nastyboget/stackmix_hkr_large",
"dataset:nastyboget/stackmix_cyrillic_large",
"dataset:nastyboget/synthetic_cyrillic_large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-02-16T15:01:16Z | ---
license: apache-2.0
datasets:
- nastyboget/stackmix_hkr_large
- nastyboget/stackmix_cyrillic_large
- nastyboget/synthetic_cyrillic_large
language:
- ru
- en
pipeline_tag: image-to-text
widget:
- src: examples/1.png
- src: examples/2.png
- src: examples/3.png
- src: examples/4.png
tags:
- ocr
---
# Model Card for TrOCR-Ru
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten) trained on large synthetic datasets from [nastyboget](https://huggingface.co/nastyboget).
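A minimal inference sketch (standard TrOCR usage in `transformers`; the image path is a placeholder, and if processor files are missing from this repo they can be loaded from the base `microsoft/trocr-base-handwritten` instead):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("raxtemur/trocr-base-ru")
model = VisionEncoderDecoderModel.from_pretrained("raxtemur/trocr-base-ru")

image = Image.open("handwritten_line.png").convert("RGB")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```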
## Metrics on HKR/Cyrillic datasets
| Metric | HKR_val | HKR_test1 | HKR_test2 | CYR_val | CYR_test |
|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Accuracy | 69.9947 | 67.4184 | 69.9187 | 72.3613 | 63.9249 |
| CER | 6.7964 | 8.9113 | 6.7278 | 6.6403 | 9.2576 |
| WER | 21.6688 | 27.3849 | 21.6200 | 27.6715 | 33.2406 |
Last updated 29/02/2024 |
alexm-nm/tinyllama-24-marlin24-4bit-channelwise | alexm-nm | 2024-05-08T15:31:07Z | 2,544 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-08T15:27:57Z | ---
license: apache-2.0
---
|
BioMistral/BioMistral-MedMNX | BioMistral | 2024-04-20T17:59:42Z | 2,543 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:johnsnowlabs/JSL-MedMNX-7B",
"base_model:BioMistral/BioMistral-7B-DARE",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-20T17:10:46Z | ---
license: cc-by-nc-nd-4.0
base_model:
- johnsnowlabs/JSL-MedMNX-7B
- BioMistral/BioMistral-7B-DARE
library_name: transformers
tags:
- mergekit
- merge
---
# BioMistral-MedMNX
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [johnsnowlabs/JSL-MedMNX-7B](https://huggingface.co/johnsnowlabs/JSL-MedMNX-7B) as a base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B-DARE](https://huggingface.co/BioMistral/BioMistral-7B-DARE)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: johnsnowlabs/JSL-MedMNX-7B
parameters:
density: 0.53
weight: 0.4
- model: BioMistral/BioMistral-7B-DARE
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
tokenizer_source: union
base_model: johnsnowlabs/JSL-MedMNX-7B
parameters:
int8_mask: true
dtype: bfloat16
```
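Assuming the YAML above is saved locally (e.g., as `config.yaml`), a merge like this is typically reproduced with mergekit's CLI; flags can vary between mergekit versions:
```
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```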
|
w601sxs/b1ade-embed-kd_3 | w601sxs | 2024-06-10T17:18:24Z | 2,543 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"arxiv:1910.09700",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-06-10T16:53:19Z | ---
model-index:
- name: no_model_name_available
results:
- dataset:
config: en
name: MTEB MassiveScenarioClassification
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 0.7591123066577001
task:
type: Classification
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 0.737088
task:
type: Classification
- dataset:
config: default
name: MTEB SCIDOCS
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
split: test
type: mteb/scidocs
metrics:
- type: ndcg_at_10
value: 0.14645
task:
type: Retrieval
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_spearman
value: 0.7433663356966029
task:
type: STS
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 0.4597050473156563
task:
type: Clustering
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: ndcg_at_10
value: 0.25419
task:
type: Retrieval
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: ap
value: 0.6721556260260827
task:
type: PairClassification
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: ndcg_at_10
value: 0.23269
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: ndcg_at_10
value: 0.29778
task:
type: Retrieval
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_spearman
value: 0.8236097824265283
task:
type: STS
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: mteb/cqadupstack-mathematica
metrics:
- type: ndcg_at_10
value: 0.1537
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: mteb/cqadupstack-english
metrics:
- type: ndcg_at_10
value: 0.24932
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 0.44358822700946515
task:
type: Clustering
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: ndcg_at_10
value: 0.15899
task:
type: Retrieval
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 0.7885898217382544
task:
type: Reranking
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 0.4966068486651516
task:
type: Clustering
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: ndcg_at_10
value: 0.12018
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 0.925079799361605
task:
type: Classification
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 0.8135064935064935
task:
type: Classification
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_spearman
value: 0.2889324869563879
task:
type: Summarization
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: ap
value: 0.8389671787853494
task:
type: PairClassification
- dataset:
config: en-en
name: MTEB STS17
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_spearman
value: 0.8952714837442336
task:
type: STS
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 0.37137555761396807
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: ndcg_at_10
value: 0.25273
task:
type: Retrieval
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 0.6864179104477611
task:
type: Classification
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
split: test
type: mteb/mind_small
metrics:
- type: map
value: 0.30932148624587386
task:
type: Reranking
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: ndcg_at_10
value: 0.20298
task:
type: Retrieval
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_spearman
value: 0.8282788944395595
task:
type: STS
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 0.4666186087388775
task:
type: Reranking
- dataset:
config: en
name: MTEB STS22
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_spearman
value: 0.6528643554148379
task:
type: STS
- dataset:
config: en
name: MTEB MTOPIntentClassification
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 0.6971044231646147
task:
type: Classification
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 0.5520754883265135
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: mteb/cqadupstack-unix
metrics:
- type: ndcg_at_10
value: 0.23623
task:
type: Retrieval
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 0.5611127024658868
task:
type: Reranking
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
split: test
type: mteb/quora
metrics:
- type: ndcg_at_10
value: 0.84318
task:
type: Retrieval
- dataset:
config: en
name: MTEB MassiveIntentClassification
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 0.7035305985205111
task:
type: Classification
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: ap
value: 0.8547019640605763
task:
type: PairClassification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 0.8622175000000001
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-R
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_spearman
value: 0.8074931072126776
task:
type: STS
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 0.5824562550961052
task:
type: Clustering
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: ndcg_at_10
value: 0.27904
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: mteb/cqadupstack-programmers
metrics:
- type: ndcg_at_10
value: 0.22149
task:
type: Retrieval
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 0.3237094363112699
task:
type: Clustering
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 0.69345703125
task:
type: Classification
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: mteb/cqadupstack-stats
metrics:
- type: ndcg_at_10
value: 0.20067
task:
type: Retrieval
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_spearman
value: 0.8059731462041985
task:
type: STS
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: ndcg_at_10
value: 0.50544
task:
type: Retrieval
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 0.6107243916242219
task:
type: Classification
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_spearman
value: 0.7407164757075673
task:
type: STS
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: mteb/cqadupstack-physics
metrics:
- type: ndcg_at_10
value: 0.31707
task:
type: Retrieval
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 0.36100815785085566
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: mteb/cqadupstack-gis
metrics:
- type: ndcg_at_10
value: 0.22124
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: mteb/cqadupstack-gaming
metrics:
- type: ndcg_at_10
value: 0.42459
task:
type: Retrieval
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_spearman
value: 0.7980073939799789
task:
type: STS
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: mteb/cqadupstack-wordpress
metrics:
- type: ndcg_at_10
value: 0.15801
task:
type: Retrieval
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 0.30348330076332997
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 0.3066700709222508
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: mteb/cqadupstack-tex
metrics:
- type: ndcg_at_10
value: 0.14452
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: mteb/cqadupstack-android
metrics:
- type: ndcg_at_10
value: 0.34255
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
split: test
type: mteb/trec-covid
metrics:
- type: ndcg_at_10
value: 0.37473
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: ndcg_at_10
value: 0.21214
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: mteb/cqadupstack-webmasters
metrics:
- type: ndcg_at_10
value: 0.25273
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: ndcg_at_10
value: 0.44416
task:
type: Retrieval
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_spearman
value: 0.7544709602964104
task:
type: STS
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 0.2932525018378166
task:
type: Clustering
- dataset:
config: en
name: MTEB AmazonReviewsClassification
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 0.43157999999999996
task:
type: Classification
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: ndcg_at_10
value: 0.11327
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 0.48450000000000004
task:
type: Classification
tags:
- mteb
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
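In the absence of documented usage, a generic feature-extraction sketch with `transformers` is shown below; the mean pooling is an assumption for illustration only, not a documented property of this model.
```python
# Generic embedding sketch -- the pooling strategy is assumed, not documented.
import torch
from transformers import AutoModel, AutoTokenizer

repo = "w601sxs/b1ade-embed-kd_3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

sentences = ["A quick example sentence.", "Another example sentence."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.inference_mode():
    hidden = model(**batch).last_hidden_state             # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1).float()       # (batch, seq, 1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling (assumed)
print(embeddings.shape)
```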
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | PracticeLLM | 2024-06-21T05:48:13Z | 2,542 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"base_model:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-26T18:15:42Z | ---
language:
- en
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model:
- upstage/SOLAR-10.7B-v1.0
- Yhyu13/LMCocktail-10.7B-v1
model-index:
- name: SOLAR-tail-10.7B-Merge-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
---
# **SOLAR-tail-10.7B-Merge-v1.0**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using [Mergekit](https://github.com/cg123/mergekit).
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
- [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
**Merge config**
```
slices:
- sources:
- model: upstage/SOLAR-10.7B-v1.0
layer_range: [0, 48]
- model: Yhyu13/LMCocktail-10.7B-v1
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
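For reference, the `t` values in the `parameters` block control how far each tensor is interpolated between the two models (`t = 0` keeps the base model's tensor, `t = 1` takes the other model's); the per-filter lists are spread across the layer range, with the plain `value: 0.5` acting as the fallback noted in the comment above. Slerp itself is the standard spherical linear interpolation:
```latex
\mathrm{slerp}(w_1, w_2; t)
  = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\, w_1
  + \frac{\sin(t\,\Omega)}{\sin\Omega}\, w_2,
\qquad
\cos\Omega = \frac{\langle w_1, w_2 \rangle}{\lVert w_1 \rVert\, \lVert w_2 \rVert}
```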
# **Model Benchmark**
## Open Ko leaderboard
- Results as reported on the [Open Ko-LLM Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
- Results as reported on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 71.68 | 66.13 | 86.54 | **66.52** | 60.57 | **84.77** | **65.58** |
| kyujinpy/Sakura-SOLAR-Instruct | **74.40** | **70.99** | **88.42** | 66.33 | **71.79** | 83.66 | 65.20 |
## lm-evaluation-harness
```
gpt2 (pretrained=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
| | |macro_f1|0.3343|± |0.0059|
|kobest_copa | 0|acc |0.6220|± |0.0153|
| | |macro_f1|0.6217|± |0.0154|
|kobest_hellaswag| 0|acc |0.4380|± |0.0222|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4366|± |0.0222|
|kobest_sentineg | 0|acc |0.4962|± |0.0251|
| | |macro_f1|0.3316|± |0.0113|
```
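The block above is raw output from an older EleutherAI lm-evaluation-harness run. An approximately equivalent invocation is sketched below; the entry point and flag names differ between harness versions, so treat it as a guide rather than an exact reproduction.
```
python main.py \
  --model gpt2 \
  --model_args pretrained=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 \
  --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg \
  --num_fewshot 0
```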
# Implementation Code
```python
### PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
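The snippet above only loads the weights and tokenizer; continuing from it, a minimal generation call (the prompt is an arbitrary example) might look like:
```python
prompt = "What is the capital of South Korea?"  # arbitrary example prompt
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
with torch.inference_mode():
    output_ids = OpenOrca.generate(**inputs, max_new_tokens=64, do_sample=False)
print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```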
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PracticeLLM__SOLAR-tail-10.7B-Merge-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.68|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.54|
|MMLU (5-Shot) |66.52|
|TruthfulQA (0-shot) |60.57|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |65.58|
|
RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf | RichardErkhov | 2024-06-16T11:50:34Z | 2,542 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-16T07:56:06Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
openbuddy-llama3-8b-v21.1-8k - GGUF
- Model creator: https://huggingface.co/OpenBuddy/
- Original model: https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [openbuddy-llama3-8b-v21.1-8k.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q2_K.gguf) | Q2_K | 2.96GB |
| [openbuddy-llama3-8b-v21.1-8k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [openbuddy-llama3-8b-v21.1-8k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [openbuddy-llama3-8b-v21.1-8k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [openbuddy-llama3-8b-v21.1-8k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [openbuddy-llama3-8b-v21.1-8k.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q3_K.gguf) | Q3_K | 3.74GB |
| [openbuddy-llama3-8b-v21.1-8k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [openbuddy-llama3-8b-v21.1-8k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [openbuddy-llama3-8b-v21.1-8k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [openbuddy-llama3-8b-v21.1-8k.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q4_0.gguf) | Q4_0 | 4.34GB |
| [openbuddy-llama3-8b-v21.1-8k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [openbuddy-llama3-8b-v21.1-8k.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q4_K.gguf) | Q4_K | 4.58GB |
| [openbuddy-llama3-8b-v21.1-8k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [openbuddy-llama3-8b-v21.1-8k.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q4_1.gguf) | Q4_1 | 4.78GB |
| [openbuddy-llama3-8b-v21.1-8k.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q5_0.gguf) | Q5_0 | 5.21GB |
| [openbuddy-llama3-8b-v21.1-8k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [openbuddy-llama3-8b-v21.1-8k.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q5_K.gguf) | Q5_K | 5.34GB |
| [openbuddy-llama3-8b-v21.1-8k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [openbuddy-llama3-8b-v21.1-8k.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q5_1.gguf) | Q5_1 | 5.65GB |
| [openbuddy-llama3-8b-v21.1-8k.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q6_K.gguf) | Q6_K | 6.14GB |
| [openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-llama3-8b-v21.1-8k-gguf/blob/main/openbuddy-llama3-8b-v21.1-8k.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
pipeline_tag: text-generation
tags:
- llama-3
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Run locally with 🦙Ollama
```
ollama run openbuddy/openbuddy-llama3-8b-v21.1-8k
```
# Copyright Notice
**Built with Meta Llama 3**
License: https://llama.meta.com/llama3/license/
Acceptable Use Policy: https://llama.meta.com/llama3/use-policy
This model is intended for use in English and Chinese.
# Prompt Format
We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.
```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
You cannot access the internet, but you have vast knowledge, cutoff: 2023-04.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```
This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
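Since the format ships as a chat template, the prompt can also be built with `transformers` instead of by hand (a sketch, assuming the tokenizer from the original OpenBuddy repo):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenBuddy/openbuddy-llama3-8b-v21.1-8k")
messages = [{"role": "user", "content": "Hello, who are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```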
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
|
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI | IDEA-CCNL | 2023-05-26T06:33:52Z | 2,541 | 3 | transformers | [
"transformers",
"pytorch",
"megatron-bert",
"text-classification",
"bert",
"NLU",
"NLI",
"zh",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-12T04:02:48Z | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-MegatronBert-1.3B-NLI
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
2021年登顶FewCLUE和ZeroCLUE的中文BERT,在数个推理任务微调后的版本
This is a version of the Chinese BERT model that topped the FewCLUE and ZeroCLUE benchmarks in 2021, fine-tuned on several NLI datasets
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBert | 1.3B | 自然语言推断 NLI |
## 模型信息 Model Information
基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的4个中文领域的NLI(自然语言推理)数据集,总计1014787个样本上微调了一个NLI版本。
Based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned an NLI version on 4 Chinese Natural Language Inference (NLI) datasets totaling 1,014,787 samples.
### 下游效果 Performance
| 模型 Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88.00 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |
## 使用 Usage
``` python
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
model = AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
texta = '今天的饭不好吃'
textb = '今天心情不好'
# Encode the premise/hypothesis pair and print the class probabilities
output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Bingsu/clip-vit-base-patch32-ko | Bingsu | 2022-11-08T11:02:10Z | 2,541 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"clip",
"zero-shot-image-classification",
"ko",
"arxiv:2004.09813",
"doi:10.57967/hf/1615",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2022-09-16T05:18:05Z | ---
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: 기타치는 고양이, 피아노 치는 강아지
example_title: Guitar, cat and dog
language: ko
license: mit
---
# clip-vit-base-patch32-ko
A Korean CLIP model trained using [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).
Training code: <https://github.com/Bing-su/KoCLIP_training_code>
Training data: all Korean-English parallel data available on AIHUB
## How to Use
#### 1.
```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor
repo = "Bingsu/clip-vit-base-patch32-ko"
model = AutoModel.from_pretrained(repo)
processor = AutoProcessor.from_pretrained(repo)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
with torch.inference_mode():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```
```python
>>> probs
tensor([[0.9926, 0.0074]])
```
#### 2.
```python
from transformers import pipeline
repo = "Bingsu/clip-vit-base-patch32-ko"
pipe = pipeline("zero-shot-image-classification", model=repo)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "분홍색 소파에 드러누운 고양이 친구들"], hypothesis_template="{}")
```
```python
>>> result
[{'score': 0.9456236958503723, 'label': '분홍색 소파에 드러누운 고양이 친구들'},
{'score': 0.05315302312374115, 'label': '고양이 두 마리'},
{'score': 0.0012233294546604156, 'label': '고양이 한 마리'}]
```
## Tokenizer
The tokenizer was trained with `.train_new_from_iterator` on top of the original CLIP tokenizer, using Korean and English data mixed at a 7:3 ratio.
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/clip/modeling_clip.py#L661-L666
```python
# text_embeds.shape = [batch_size, sequence_length, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
# casting to torch.int for onnx compatibility: argmax doesn't support int64 inputs with opset 14
pooled_output = last_hidden_state[
torch.arange(last_hidden_state.shape[0]), input_ids.to(torch.int).argmax(dim=-1)
]
```
Because the CLIP model uses the token with the largest id when computing `pooled_output`, the eos token must be the last token.
|
timm/convnext_atto_ols.a2_in1k | timm | 2024-02-10T23:26:51Z | 2,541 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-13T07:06:15Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for convnext_atto_ols.a2_in1k
A ConvNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.7
- GMACs: 0.6
- Activations (M): 4.1
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_atto_ols.a2_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_atto_ols.a2_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 40, 56, 56])
# torch.Size([1, 80, 28, 28])
# torch.Size([1, 160, 14, 14])
# torch.Size([1, 320, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_atto_ols.a2_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 320, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
RichardErkhov/01-ai_-_Yi-1.5-9B-gguf | RichardErkhov | 2024-06-14T21:35:39Z | 2,541 | 0 | null | [
"gguf",
"arxiv:2403.04652",
"region:us"
] | null | 2024-06-14T20:35:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-1.5-9B - GGUF
- Model creator: https://huggingface.co/01-ai/
- Original model: https://huggingface.co/01-ai/Yi-1.5-9B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Yi-1.5-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q2_K.gguf) | Q2_K | 3.12GB |
| [Yi-1.5-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.IQ3_XS.gguf) | IQ3_XS | 3.46GB |
| [Yi-1.5-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.IQ3_S.gguf) | IQ3_S | 3.64GB |
| [Yi-1.5-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q3_K_S.gguf) | Q3_K_S | 3.63GB |
| [Yi-1.5-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.IQ3_M.gguf) | IQ3_M | 3.78GB |
| [Yi-1.5-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q3_K.gguf) | Q3_K | 4.03GB |
| [Yi-1.5-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q3_K_M.gguf) | Q3_K_M | 4.03GB |
| [Yi-1.5-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q3_K_L.gguf) | Q3_K_L | 4.37GB |
| [Yi-1.5-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.IQ4_XS.gguf) | IQ4_XS | 4.5GB |
| [Yi-1.5-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q4_0.gguf) | Q4_0 | 4.69GB |
| [Yi-1.5-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.IQ4_NL.gguf) | IQ4_NL | 4.73GB |
| [Yi-1.5-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q4_K_S.gguf) | Q4_K_S | 4.72GB |
| [Yi-1.5-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q4_K.gguf) | Q4_K | 4.96GB |
| [Yi-1.5-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [Yi-1.5-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q4_1.gguf) | Q4_1 | 5.19GB |
| [Yi-1.5-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q5_0.gguf) | Q5_0 | 5.69GB |
| [Yi-1.5-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q5_K_S.gguf) | Q5_K_S | 5.69GB |
| [Yi-1.5-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q5_K.gguf) | Q5_K | 5.83GB |
| [Yi-1.5-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q5_K_M.gguf) | Q5_K_M | 5.83GB |
| [Yi-1.5-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q5_1.gguf) | Q5_1 | 6.19GB |
| [Yi-1.5-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q6_K.gguf) | Q6_K | 6.75GB |
| [Yi-1.5-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/01-ai_-_Yi-1.5-9B-gguf/blob/main/Yi-1.5-9B.Q8_0.gguf) | Q8_0 | 8.74GB |
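As a brief illustration (not part of the original quantization notes), any of the files above can be fetched programmatically with `huggingface_hub`; the repo id and file name below are taken from the table:
```python
from huggingface_hub import hf_hub_download

# Fetch one quant from this repository; file name taken from the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/01-ai_-_Yi-1.5-9B-gguf",
    filename="Yi-1.5-9B.Q4_K_M.gguf",
)
print(gguf_path)  # local path, usable with llama.cpp-compatible runtimes
```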
Original model description:
---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://01-ai.github.io/">💪 Tech Blog</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🟣 wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf | RichardErkhov | 2024-06-15T03:47:07Z | 2,541 | 1 | null | [
"gguf",
"region:us"
] | null | 2024-06-15T02:55:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
InfinityNexus_9B - GGUF
- Model creator: https://huggingface.co/ChaoticNeutrals/
- Original model: https://huggingface.co/ChaoticNeutrals/InfinityNexus_9B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [InfinityNexus_9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q2_K.gguf) | Q2_K | 3.13GB |
| [InfinityNexus_9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.IQ3_XS.gguf) | IQ3_XS | 3.48GB |
| [InfinityNexus_9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.IQ3_S.gguf) | IQ3_S | 3.67GB |
| [InfinityNexus_9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q3_K_S.gguf) | Q3_K_S | 3.65GB |
| [InfinityNexus_9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.IQ3_M.gguf) | IQ3_M | 3.79GB |
| [InfinityNexus_9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q3_K.gguf) | Q3_K | 4.05GB |
| [InfinityNexus_9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q3_K_M.gguf) | Q3_K_M | 4.05GB |
| [InfinityNexus_9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q3_K_L.gguf) | Q3_K_L | 4.41GB |
| [InfinityNexus_9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.IQ4_XS.gguf) | IQ4_XS | 4.55GB |
| [InfinityNexus_9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q4_0.gguf) | Q4_0 | 4.74GB |
| [InfinityNexus_9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.IQ4_NL.gguf) | IQ4_NL | 4.79GB |
| [InfinityNexus_9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q4_K_S.gguf) | Q4_K_S | 4.78GB |
| [InfinityNexus_9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q4_K.gguf) | Q4_K | 5.04GB |
| [InfinityNexus_9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q4_K_M.gguf) | Q4_K_M | 5.04GB |
| [InfinityNexus_9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q4_1.gguf) | Q4_1 | 5.26GB |
| [InfinityNexus_9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q5_0.gguf) | Q5_0 | 5.77GB |
| [InfinityNexus_9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q5_K_S.gguf) | Q5_K_S | 5.77GB |
| [InfinityNexus_9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q5_K.gguf) | Q5_K | 5.93GB |
| [InfinityNexus_9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q5_K_M.gguf) | Q5_K_M | 5.93GB |
| [InfinityNexus_9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q5_1.gguf) | Q5_1 | 6.29GB |
| [InfinityNexus_9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q6_K.gguf) | Q6_K | 6.87GB |
| [InfinityNexus_9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_InfinityNexus_9B-gguf/blob/main/InfinityNexus_9B.Q8_0.gguf) | Q8_0 | 8.89GB |
Original model description:
---
base_model:
- Endevor/InfinityRP-v1-7B
- jeiku/NarrativeNexus_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
---
# InfinityNexus

GGUF available here: https://huggingface.co/Lewdiculous/InfinityNexus_9B-GGUF-IQ-Imatrix
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
* [jeiku/NarrativeNexus_7B](https://huggingface.co/jeiku/NarrativeNexus_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Endevor/InfinityRP-v1-7B
layer_range: [0, 20]
- sources:
- model: jeiku/NarrativeNexus_7B
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
|
elyza/ELYZA-japanese-Llama-2-13b-fast-instruct | elyza | 2023-12-27T01:41:51Z | 2,540 | 22 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"en",
"arxiv:2307.09288",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-25T18:14:10Z | ---
license: llama2
language:
- ja
- en
---
## ELYZA-japanese-Llama-2-13b-fast-instruct

### Model Description
**ELYZA-japanese-Llama-2-13b** is a model that extends Japanese language capability through additional pre-training on top of Llama 2.
For details, please refer to the [blog post](https://note.com/elyza/n/n5d42686b60b7).
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"
text = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"
model_name = "elyza/ELYZA-japanese-Llama-2-13b-fast-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
use_cache=True,
device_map="auto",
low_cpu_mem_usage=True,
)
model.eval()
prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format(
bos_token=tokenizer.bos_token,
b_inst=B_INST,
system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}",
prompt=text,
e_inst=E_INST,
)
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True)
print(output)
```
### ELYZA-japanese-Llama-2-13b Models
| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
|[elyza/ELYZA-japanese-Llama-2-13b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b)| 32000 | 13.02B |
|[elyza/ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-instruct)| 32000 | 13.02B |
|[elyza/ELYZA-japanese-Llama-2-13b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast)| 44581 | 13.14B |
|[elyza/ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast-instruct)| 44581 | 13.14B |
### Developers
- [Akira Sasaki](https://huggingface.co/akirasasaki)
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
- [Sam Passaglia](https://huggingface.co/passaglia)
- [Daisuke Oba](https://huggingface.co/daisuk30ba) (intern)
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### How to Cite
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-13b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba},
year={2023},
}
```
### Citations
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF | mradermacher | 2024-06-10T09:28:52Z | 2,540 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"not-for-all-audiences",
"en",
"base_model:Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T08:22:33Z | ---
base_model: Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
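For a minimal, illustrative sketch (not from this README), a single-file quant from the table below can be run with `llama-cpp-python`; the file name, context size, and generation settings are assumptions:
```python
from llama_cpp import Llama

# Load a single-file GGUF quant downloaded from this repository.
llm = Llama(
    model_path="Jamet-L3-Stheno-BlackOasis-8B.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Write a one-sentence story about a lighthouse.", max_tokens=64)
print(out["choices"][0]["text"])
```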
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Jamet-L3-Stheno-BlackOasis-8B-GGUF/resolve/main/Jamet-L3-Stheno-BlackOasis-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
unsloth/mistral-7b-v0.3 | unsloth | 2024-05-22T18:24:46Z | 2,538 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"mistral-7b",
"mistral-instruct",
"instruct",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-22T18:06:41Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- mistral
- mistral-7b
- mistral-instruct
- instruct
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing
For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
|
Chrisisis/5DtjX4AHYbPDSnjf347seM7R88uzCDrXACiHzspVAVWMDUdR_vgg | Chrisisis | 2024-02-24T08:26:08Z | 2,537 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-05T18:35:27Z | Entry not found |
projecte-aina/FLOR-6.3B | projecte-aina | 2024-03-19T08:33:00Z | 2,536 | 26 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"FLOR",
"spanish",
"catalan",
"english",
"en",
"es",
"ca",
"dataset:projecte-aina/CATalog",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-15T13:31:04Z | ---
language:
- en
- es
- ca
licence:
- apache-2.0
tags:
- FLOR
- bloom
- spanish
- catalan
- english
pipeline_tag: text-generation
widget:
- text: |-
Respon a la pregunta següent.
Pregunta: "Quina és la capital de Suècia?"
Resposta: "La capital de Suècia és Estocolm."
----
Respon a la pregunta següent.
Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
Resposta: "La majoria de gent consumeix cafè per despertar-se."
----
Respon a la pregunta següent.
Pregunta: "Explica com funciona un motor de combustió"
Resposta:
example_title: Pregunta-Resposta
- text: >-
Extrae las entidades nombradas del siguiente texto:
Texto: "Me llamo Wolfgang y vivo en Berlin"
Entidades: Wolfgang:PER, Berlin:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Hoy voy a visitar el parc güell tras salir del barcelona
supercomputing center"
Entidades: parc güell:LOC, barcelona supercomputing center:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Maria y Miguel no tienen ningún problema contigo"
Entidades: Maria:PER, Miguel:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Damián se cortó el pelo"
Entidades: Damián:PER
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
Entidades: Pablo:PER, Barcelona:LOC
----
Extrae las entidades nombradas del siguiente texto:
Texto: "Carlos comparte piso con Marc"
Entidades:
example_title: Entidades-Nombradas
datasets:
- projecte-aina/CATalog
---
# FLOR-6.3B
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
</details>
## Model description
**FLOR-6.3B** is a 6.3B-parameter transformer-based causal language model for Catalan, Spanish, and English.
It is the result of a language adaptation technique performed on [BLOOM-7.1B](https://huggingface.co/bigscience/bloom-7b1),
which involves modifying the model's vocabulary and embedding layer, and continuously pre-training the model with 140B tokens in our target languages.
For more details, take a look at [this blogpost](https://medium.com/@mpamies247/flor-6-3b-a-chinchilla-compliant-model-for-catalan-spanish-and-english-7cdb389a9aac) about the project.
## Intended uses and limitations
The **FLOR-6.3B** model is ready-to-use only for causal language modeling.
It can perform text-generation tasks and be fine-tuned for specific scenarios.
## How to use
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
input_text = "Sovint em trobo pensant en tot allò que"
model_id = "projecte-aina/FLOR-6.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques
on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Language adaptation and training
The language adaptation technique used to create FLOR-6.3B requires the vocabulary of the source model
to be adapted before continuing its pre-training with data in the target languages. Specifically, we proceeded as follows:
1) We trained our own BPE tokenizer for Catalan, Spanish, and English, and replaced the original BLOOM tokenizer and vocabulary with it. This procedure implied a downsizing of the original BLOOM's embedding layer and, therefore, a model compression from 7.1B parameters to 6.3B.
2) The embeddings corresponding to tokens that are present in both the original and the target vocabulary (matching tokens) were used for initialization.
3) The embeddings from tokens not present in BLOOM's original vocabulary were initialized as the average of all embeddings.
4) The model was initialized with the weights from BLOOM-7.1B, and with our adapted tokenizer (step 1) and embeddings (steps 2-3).
5) The model was then trained on a corpus that contains a mixture of Catalan, Spanish, and English data.
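A minimal sketch of what steps 2 and 3 could look like in PyTorch (illustrative only — the function and variable names are assumptions, not the actual adaptation code):
```python
import torch

def adapt_embeddings(old_embeddings: torch.Tensor, old_vocab: dict, new_vocab: dict) -> torch.Tensor:
    """Build the embedding matrix for the adapted vocabulary (steps 2-3)."""
    # Step 3: every new token starts from the mean of all source embeddings.
    mean_embedding = old_embeddings.mean(dim=0)
    new_embeddings = mean_embedding.repeat(len(new_vocab), 1)
    # Step 2: tokens shared by both vocabularies keep their original vectors.
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_embeddings[new_id] = old_embeddings[old_id]
    return new_embeddings
```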
### Training data
The training corpus is composed of 140B tokens gathered from web crawlings and public domain data. Most of the sources in Catalan have been obtained from the [CATalog 1.0](https://huggingface.co/datasets/projecte-aina/CATalog) dataset, filtered with a minimum threshold of 0.6 and oversampling some of the sources it integrates to different extents.
Dataset | Language | Words (per-epoch) | Epochs | Total Tokens |
|---------------------|----------|--------------------|--------------|--------------|
mc4 | ca | 5,861.79M | 1.5 | 13,452.81M |
MaCoCu | ca | 1,658.89M | 2 | 5,076.21M |
CaWac | ca | 1,286.83M | 2.5 | 4,922.14M |
oscar-2301 | ca | 1,784.57M | 1.75 | 4,778.17M |
RacoCatala Articles | ca | 358.57M | 4 | 2,194.42M |
RacoCatala Forums | ca | 1,301.12M | 1 | 1,990.71M |
Tesis (TDX) | ca | 323.60M | 4 | 1,980.46M |
oscar-2201 | ca | 1,155.35M | 1 | 1,767.69M |
Wikipedia | ca | 266.69M | 4 | 1,632.17M |
Nació Digital | ca | 216.27M | 4 | 1,323.59M |
colossal-oscar-05-06-23 | ca | 207.59M | 4 | 1,270.43M |
colossal-oscar-03-04-23 | ca | 195.43M | 4 | 1,196.01M |
colossal-oscar-2022-27 | ca | 195.03M | 4 | 1,193.59M |
Crawling populars | ca | 683.25M | 1 | 1,045.38M |
El Món | ca | 85.27M | 4 | 521.85M |
ACN | ca | 81.25M | 4 | 497.22M |
DOGV | ca | 76.48M | 4 | 468.05M |
DOGC | ca | 70.51M | 4 | 431.51M |
Vilaweb | ca | 46.90M | 4 | 287.04M |
hplt | ca | 160.27M | 1 | 245.21M |
Les Corts Valencianes | ca | 26.88M | 4 | 164.53M |
IB3 | ca | 15.82M | 4 | 96.82M |
BOUA | ca | 13.42M | 4 | 82.13M |
Parlament | ca | 10.09M | 4 | 61.77M |
Aquí Berguedà | ca | 8.23M | 4 | 50.34M |
Wikimedia | ca | 3.90M | 4 | 23.88M |
Gutenberg | ca | 1.29M | 4 | 7.87M |
OSCAR 23.01 | es | 53,244.56M | 0.303 | 23,070.34M |
colossal_oscar_05-06-23 | es | 5,548.27M | 1 | 7,934.02M |
colossal_oscar_03-04-23 | es | 5,090.46M | 1 | 7,279.36M |
All_bio_corpora | es | 954.85M | 2 | 2,730.88M |
Wikipedia | es | 777.49M | 2 | 2,223.63M |
BOE | es | 1,031.28M | 1 | 1,474.73M |
Tesis (TDX) | es | 268.66M | 2 | 768.37M |
Eurlex | es | 459.19M | 1 | 656.64M |
CSIC | es | 156.76M | 2 | 448.33M |
BORME | es | 63.23M | 1 | 90.42M |
colossal_oscar_05-06-23 | en | 51,615.35M | 0.25 | 21,162.30M |
colossal_oscar_03-04-23 | en | 49,454.01M | 0.14 | 11,354.64M |
Wikipedia | en | 2,116.53M | 2 | 6,942.23M |
Gutenberg | en | 3,513.82M | 1 | 5,762.66M |
Eurlex | en | 438.92M | 1 | 719.83M |
legal-mc4 | en | 417.97M | 1 | 685.47M |
### Languages
The training data has the same amount of Catalan, Spanish, and English texts.
The table below shows the final language distribution:
|Language|Percentage|
|--------|----------|
| Catalan (CA) | 33.39% |
| Spanish (ES) | 33.32% |
| English (EN) | 33.29% |
### Framework
The training was conducted in 16 Cerebras' [CS-2 systems](https://www.cerebras.net/product-system/)
using the [cs-2.0.2](https://github.com/Cerebras/modelzoo/releases/tag/Release_2.0.2) release of their software.
## Evaluation
FLOR-6.3B has been evaluated in a 5-shot setting, using EleutherAI's *LM Evaluation Harness*.
The evaluation benchmark includes tasks in Catalan, Spanish, and English, with particular emphasis on Catalan datasets.
The tasks were chosen to cover several evaluation areas in order to provide a comprehensive overview of the model's capabilities.
The baselines used to compare our results are multilingual and English open-source 7B models and smaller models of the FLOR family of models: **TBC**.
Our implementation of EleutherAI's *LM Evaluation Harness* can be found [here](https://github.com/langtech-bsc/lm-evaluation-harness/tree/FLOR-eval).
The following is a list of evaluation areas and their respective datasets:
- Reading Comprehension: [Belebele](https://huggingface.co/datasets/facebook/belebele)
- Question Answering: [XQuAD](https://huggingface.co/datasets/xquad), [CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa), [CoQCat](https://huggingface.co/datasets/projecte-aina/CoQCat)
- Natural Language Inference: [XNLI](https://huggingface.co/datasets/xnli) and its translation to Catalan ([XNLI-ca](https://huggingface.co/datasets/projecte-aina/xnli-ca)), [TE-ca](https://huggingface.co/datasets/projecte-aina/teca)
- Paraphrase Identification: [PAWS-X](https://huggingface.co/datasets/paws-x) and its translation to Catalan ([PAWS-ca](https://huggingface.co/datasets/projecte-aina/PAWS-ca)), [Parafraseja](https://huggingface.co/datasets/projecte-aina/Parafraseja)
- Commonsense Reasoning: [COPA](https://people.ict.usc.edu/~gordon/copa.html) and its translation to Catalan ([COPA-ca](https://huggingface.co/datasets/projecte-aina/COPA-ca))
- Translation: [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
### Results
| Dataset | Lang. | Task | FLOR-6.3B | BLOOM-7.1B |
|-------------|--------|----------------------------|-------------|-------------|
| Teca | ca | Natural Language Inference | **49.79**🔥 | 46.91 |
| XNLI | ca | Natural Language Inference | **51.70**🔥 | 49.20 |
| XNLI | es | Natural Language Inference | **50.28**🔥 | 47.62 |
| XNLI | en | Natural Language Inference | **52.55**🔥 | 51.96 |
| Belebele | ca | Reading Comprehension | **48.98**🔥 | 48.57 |
| Belebele | es | Reading Comprehension | **48.16** | **48.16** |
| Belebele | en | Reading Comprehension | 49.80 | **50.20**🔥 |
| CatalanQA | ca | Question Answering | **71.80**🔥 | 69.54 |
| CoQCat | ca | Question Answering | **65.96**🔥 | 58.49 |
| XQuAD | ca | Question Answering | 59.01 | **60.94**🔥 |
| XQuAD | es | Question Answering | **63.80**🔥 | 61.76 |
| XQuAD | en | Question Answering | **70.02**🔥 | 69.76 |
| COPA | ca | Question Answering | **78.00**🔥 | 72.60 |
| COPA | en | Question Answering | **81.00**🔥 | 79.00 |
| XStoryCloze | es | Question Answering | **69.82**🔥 | 66.45 |
| XStoryCloze | en | Question Answering | **74.45**🔥 | 70.81 |
| Parafraseja | ca | Paraphrase Identification | **62.88**🔥 | 60.27 |
| PAWS-X | ca | Paraphrase Identification | **59.70**🔥 | 59.35 |
| PAWS-X | es | Paraphrase Identification | 57.70 | **58.65**🔥 |
| PAWS-X | en | Paraphrase Identification | 59.65 | **62.85**🔥 |
| FLoRes | ca->es | Machine Translation | **24.98**🔥 | 24.21 |
| FLoRes | es->ca | Machine Translation | **25.24**🔥 | 23.19 |
| FLoRes | ca->en | Machine Translation | **42.89**🔥 | 40.93 |
| FLoRes | en->ca | Machine Translation | **39.29**🔥 | 34.30 |
| FLoRes | es->en | Machine Translation | **28.61**🔥 | 27.48 |
| FLoRes | en->es | Machine Translation | **25.35**🔥 | 23.72 |
Note: The metrics are F1-score for question-answering tasks, BLEU for translation, and accuracy for the rest.
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Disclaimer
<details>
<summary>Click to expand</summary>
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it)
or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and,
in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (Barcelona Supercomputing Center)
be liable for any results arising from the use made by third parties.
</details> |
Qdrant/bge-large-en-v1.5-onnx | Qdrant | 2024-01-16T08:37:14Z | 2,536 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-01-16T08:35:46Z | Entry not found |
digiplay/SomethingPhenomenal_vivacityV2 | digiplay | 2024-04-13T03:46:43Z | 2,536 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-04-13T00:38:35Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/65849?modelVersionId=76306
|
ruslanmv/llama3-8B-medical | ruslanmv | 2024-05-15T12:12:08Z | 2,536 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"ruslanmv",
"trl",
"en",
"dataset:ruslanmv/ai-medical-chatbot",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2024-04-24T14:29:11Z | ---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- ruslanmv/ai-medical-chatbot
model-index:
- name: llama3-8B-medical
results: []
widget:
- example_title: llama3-8B-medical
messages:
- role: system
content: >-
You are an AI Medical Chatbot Assistant, providing comprehensive and informative responses to your inquiries.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something incorrect.
- role: user
      content: I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?
output:
text: >-
Yes, it is possible. Hypothyroidism can present symptoms like increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism.
I recommend consulting with a healthcare provider.
---
# Medical-Llama3-8B-4bit: Fine-Tuned Llama3 for Medical Q&A
[](https://ruslanmv.com/)
A medical fine-tuned version of LLaMA-3-8B, quantized to 4 bits, trained on common open-source datasets and showing improvements on multilingual tasks. Standard bit-quantization was applied after fine-tuning, reducing the compute time and memory required to run the model. The overall architecture is entirely LLaMA-3 based.
This repository provides a fine-tuned version of the powerful Llama3 8B model, specifically designed to answer medical questions in an informative way. It leverages the rich knowledge contained in the AI Medical Chatbot dataset ([ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)).
**Model & Development**
- **Developed by:** ruslanmv
- **License:** Apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
**Key Features**
- **Medical Focus:** Optimized to address health-related inquiries.
- **Knowledge Base:** Trained on a comprehensive medical chatbot dataset.
- **Text Generation:** Generates informative and potentially helpful responses.
**Installation**
This model is accessible through the Hugging Face Transformers library. Install it using pip:
```bash
pip install git+https://github.com/huggingface/accelerate.git
pip install git+https://github.com/huggingface/transformers.git
pip install bitsandbytes
```
**Usage Example**
Here's a Python code snippet demonstrating how to interact with the `llama3-8B-medical` model and generate answers to your medical questions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Load tokenizer and model
model_id = "ruslanmv/llama3-8B-medical"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config, device_map="auto")
def create_prompt(user_query):
B_INST, E_INST = "<s>[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are an AI Medical Chatbot Assistant, provide comprehensive and informative responses to your inquiries.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
SYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS
instruction = f"User asks: {user_query}\n"
prompt = B_INST + SYSTEM_PROMPT + instruction + E_INST
return prompt.strip()
def generate_text(model, tokenizer, user_query,
max_length=200,
temperature=0.8,
num_return_sequences=1):
prompt = create_prompt(user_query)
# Tokenize the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device) # Move input_ids to the same device as the model
# Generate text
output = model.generate(
input_ids=input_ids,
max_length=max_length,
temperature=temperature,
num_return_sequences=num_return_sequences,
pad_token_id=tokenizer.eos_token_id, # Set pad token to end of sequence token
do_sample=True
)
# Decode the generated output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
# Split the generated text based on the prompt and take the portion after it
generated_text = generated_text.split(prompt)[-1].strip()
return generated_text
# Example usage
# - Context: First describe your problem.
# - Question: Then make the question.
user_query = "I'm a 35-year-old male experiencing symptoms like fatigue, increased sensitivity to cold, and dry, itchy skin. Could these be indicative of hypothyroidism?"
generated_text = generate_text(model, tokenizer, user_query)
print(generated_text)
```
The generated answer looks like this:
```
Yes, it is possible. Hypothyroidism can present symptoms like increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism. I recommend consulting with a healthcare provider. 2. Hypothyroidism can present symptoms like fever, increased sensitivity to cold, dry skin, and fatigue. These symptoms are characteristic of hypothyroidism.
```
**Important Note**
This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns.
**License**
This model is distributed under the Apache License 2.0 (see LICENSE file for details).
**Contributing**
We welcome contributions to this repository! If you have improvements or suggestions, feel free to create a pull request.
**Disclaimer**
While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. It is crucial to consult a doctor or other healthcare professional for definitive medical advice.
 |
CiroN2022/cd-md-music | CiroN2022 | 2023-08-24T15:06:05Z | 2,535 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | 2023-08-24T15:05:56Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text:
---
# CD/MD Music

None
## Image examples for the model:









|
kyujinpy/Sakura-SOLAR-Instruct-DPO-v2 | kyujinpy | 2024-03-04T12:15:16Z | 2,535 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:argilla/distilabel-math-preference-dpo",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-24T16:11:41Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- argilla/distilabel-math-preference-dpo
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLAR-Instruct-DPO-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.76
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
name: Open LLM Leaderboard
---
# **Sakura-SOLAR-Instruct-DPO-v2**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Fine-tuned with the DPO (Direct Preference Optimization) method, using the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) preference dataset.
Training details and code are shared in the repository below.
Please see: ⭐[Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).
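For readers unfamiliar with DPO, the snippet below sketches the core preference loss that DPO optimizes. It is an illustrative PyTorch sketch only, not the training code used for this model (the actual setup is in the repository linked above), and the log-probabilities at the end are made up.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is the summed log-probability of a chosen/rejected response
    under the trainable policy model or the frozen reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to prefer chosen responses over rejected ones
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy example with made-up log-probabilities
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss.item())
```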
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
# Implementation Code
```python
# Load Sakura-SOLAR-Instruct-DPO-v2 with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Sakura-SOLAR-Instruct-DPO-v2"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
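A minimal generation example with the model and tokenizer loaded above (the prompt and generation settings are illustrative, not the author's recommended parameters):

```python
prompt = "What is the derivative of x^2?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```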
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct-DPO-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.14|
|AI2 Reasoning Challenge (25-Shot)|70.90|
|HellaSwag (10-Shot) |88.41|
|MMLU (5-Shot) |66.48|
|TruthfulQA (0-shot) |71.86|
|Winogrande (5-shot) |83.43|
|GSM8k (5-shot) |63.76|
|
TheBloke/Tess-10.7B-v1.5b-GGUF | TheBloke | 2024-01-28T20:20:45Z | 2,535 | 6 | transformers | [
"transformers",
"gguf",
"solar",
"base_model:migtissera/Tess-10.7B-v1.5b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-28T20:01:42Z | ---
base_model: migtissera/Tess-10.7B-v1.5b
inference: false
license: apache-2.0
model_creator: Migel Tissera
model_name: Tess 10.7B V1.5B
model_type: solar
prompt_template: 'SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Tess 10.7B V1.5B - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Tess 10.7B V1.5B](https://huggingface.co/migtissera/Tess-10.7B-v1.5b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Tess 10.7B V1.5B](https://huggingface.co/migtissera/Tess-10.7B-v1.5b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Tess-10.7B-v1.5b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
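If you are building the prompt in your own code, the template can be filled in with simple string formatting. This is just an illustrative helper with placeholder system/user text, not part of the model's own tooling:

```python
def format_orca_vicuna(system_message: str, prompt: str) -> str:
    # Orca-Vicuna template used by this model
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

print(format_orca_vicuna(
    "You are a helpful assistant.",
    "Summarize the plot of Hamlet in two sentences.",
))
```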
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tess-10.7b-v1.5b.Q2_K.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q2_K.gguf) | Q2_K | 2 | 4.00 GB| 6.50 GB | significant quality loss - not recommended for most purposes |
| [tess-10.7b-v1.5b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q3_K_S.gguf) | Q3_K_S | 3 | 4.66 GB| 7.16 GB | very small, high quality loss |
| [tess-10.7b-v1.5b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q3_K_M.gguf) | Q3_K_M | 3 | 5.20 GB| 7.70 GB | very small, high quality loss |
| [tess-10.7b-v1.5b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss |
| [tess-10.7b-v1.5b.Q4_0.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tess-10.7b-v1.5b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q4_K_S.gguf) | Q4_K_S | 4 | 6.12 GB| 8.62 GB | small, greater quality loss |
| [tess-10.7b-v1.5b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended |
| [tess-10.7b-v1.5b.Q5_0.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tess-10.7b-v1.5b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended |
| [tess-10.7b-v1.5b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended |
| [tess-10.7b-v1.5b.Q6_K.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss |
| [tess-10.7b-v1.5b.Q8_0.gguf](https://huggingface.co/TheBloke/Tess-10.7B-v1.5b-GGUF/blob/main/tess-10.7b-v1.5b.Q8_0.gguf) | Q8_0 | 8 | 11.40 GB| 13.90 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Tess-10.7B-v1.5b-GGUF and below it, a specific filename to download, such as: tess-10.7b-v1.5b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Tess-10.7B-v1.5b-GGUF tess-10.7b-v1.5b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Tess-10.7B-v1.5b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tess-10.7B-v1.5b-GGUF tess-10.7b-v1.5b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tess-10.7b-v1.5b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./tess-10.7b-v1.5b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tess-10.7b-v1.5b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Tess 10.7B V1.5B
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-10.7B-v1.5b was trained on the SOLAR-10.7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
<!-- original-model-card end -->
|
IlyaGusev/rut5_base_sum_gazeta | IlyaGusev | 2022-07-13T15:36:04Z | 2,534 | 10 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"ru",
"dataset:IlyaGusev/gazeta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | 2022-03-02T23:29:04Z | ---
language:
- ru
tags:
- summarization
- t5
datasets:
- IlyaGusev/gazeta
license:
- apache-2.0
inference:
parameters:
no_repeat_ngram_size: 4
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
example_title: "Википедия"
- text: "С 1 сентября в России вступают в силу поправки в закон «О банкротстве» — теперь должники смогут освобождаться от непосильных обязательств во внесудебном порядке, если сумма задолженности составляет не менее 50 тыс. рублей и не превышает 500 тыс. рублей без учета штрафов, пени, процентов за просрочку платежа и прочих имущественных или финансовых санкций. У физлиц и индивидуальных предпринимателей появилась возможность пройти процедуру банкротства без участия суда и финансового управляющего — достаточно подать соответствующее заявление через МФЦ. Сумму задолженности и список всех известных заявителю кредиторов нужно предоставить самостоятельно. Если все условия соблюдены, сведения внесут в Единый федеральный реестр в течение трех рабочих дней. При этом на момент подачи заявления в отношении заявителя должно быть окончено исполнительное производство с возвращением исполнительного документа взыскателю. Это значит, что у потенциального банкрота не должно быть имущества, которое можно взыскать. Кроме того, в отношении гражданина не должно быть возбуждено другое исполнительное производство. В период всей процедуры заявитель не сможет брать займы, кредиты, выдавать поручительства, совершать иные обеспечительные сделки. Внесудебное банкротство будет длиться шесть месяцев, в течение которых также будет действовать мораторий на удовлетворение требований кредиторов, отмеченных в заявлении должника, и мораторий об уплате обязательных платежей. Кроме того, прекращается начисление неустоек и иных финансовых санкций; имущественные взыскания (кроме алиментов) также будут приостановлены. По завершению процедуры заявителя освободят от дальнейшего выполнения требований кредиторов, указанных в заявлении о признании его банкротом, а эта задолженность признается безнадежной. В прошлом месяце стало известно, что за первое полугодие 2020 года российские суды признали банкротами 42,7 тыс. граждан (в том числе индивидуальных предпринимателей) — по данным единого реестра «Федресурс», это на 47,2% больше показателя аналогичного периода 2019 года. Рост числа обанкротившихся граждан во втором квартале по сравнению с первым замедлился — такая динамика обусловлена тем, что в период ограничений с 19 марта по 11 мая суды редко рассматривали банкротные дела компаний и меньше, чем обычно, в отношении граждан, объяснял руководитель проекта «Федресурс» Алексей Юхнин. Он прогнозирует, что во втором полугодии мы увидим рост показателя, когда суды рассмотрят все дела, что не смогли ранее в режиме ограничений. По его данным, уже в июне число личных банкротств выросло до 11,5 тыс., что в два раза превышает показатель аналогичного периода 2019 года."
example_title: "Новости"
- text: "Актуальность проблемы. Электронная информация играет все большую роль во всех сферах жизни современного общества. В последние годы объем научно-технической текстовой информации в электронном виде возрос настолько, что возникает угроза обесценивания этой информации в связи с трудностями поиска необходимых сведений среди множества доступных текстов. Развитие информационных ресурсов Интернет многократно усугубило проблему информационной перегрузки. В этой ситуации особенно актуальными становятся методы автоматизации реферирования текстовой информации, то есть методы получения сжатого представления текстовых документов–рефератов (аннотаций). Постановка проблемы автоматического реферирования текста и соответственно попытки ее решения с использованием различных подходов предпринимались многими исследователями. История применения вычислительной техники для реферирования насчитывает уже более 50 лет и связана с именами таких исследователей, как Г.П. Лун, В.Е. Берзон, И.П. Cевбо, Э.Ф. Скороходько, Д.Г. Лахути, Р.Г. Пиотровский и др. За эти годы выработаны многочисленные подходы к решению данной проблемы, которые достаточно четко подразделяются на два направления: автоматическое реферирование, основанное на экстрагировании из первичных документов с помощью определенных формальных признаков «наиболее информативных» фраз (фрагментов), совокупность которых образует некоторый экстракт; автоматическое реферирование, основанное на выделении из текстов с помощью специальных информационных языков наиболее существенной информации и порождении новых текстов (рефератов), содержательно обобщающих первичные документы."
example_title: "Научная статья"
---
# RuT5SumGazeta
## Model description
This is the model for abstractive summarization for Russian based on [rut5-base](https://huggingface.co/cointegrated/rut5-base).
## Intended uses & limitations
#### How to use
Colab: [link](https://colab.research.google.com/drive/1re5E26ZIDUpAx1gOCZkbF3hcwjozmgG0)
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "IlyaGusev/rut5_base_sum_gazeta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
article_text = "..."
input_ids = tokenizer(
[article_text],
max_length=600,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)[0]
summary = tokenizer.decode(output_ids, skip_special_tokens=True)
print(summary)
```
## Training data
- Dataset: [Gazeta](https://huggingface.co/datasets/IlyaGusev/gazeta)
## Training procedure
- Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py)
- Config: [t5_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/t5_training_config.json)
## Eval results
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v1 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 |
* Train dataset: **Gazeta v1 train**
* Test dataset: **Gazeta v2 test**
* Source max_length: **600**
* Target max_length: **200**
* no_repeat_ngram_size: **4**
* num_beams: **5**
| Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length |
|:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----|
| [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 |
| [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 |
| [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 |
Predicting all summaries:
```python
import json
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from datasets import load_dataset
def gen_batch(inputs, batch_size):
batch_start = 0
while batch_start < len(inputs):
yield inputs[batch_start: batch_start + batch_size]
batch_start += batch_size
def predict(
model_name,
input_records,
output_file,
max_source_tokens_count=600,
batch_size=8
):
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
predictions = []
for batch in gen_batch(input_records, batch_size):
texts = [r["text"] for r in batch]
input_ids = tokenizer(
texts,
add_special_tokens=True,
max_length=max_source_tokens_count,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"].to(device)
output_ids = model.generate(
input_ids=input_ids,
no_repeat_ngram_size=4
)
summaries = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
for s in summaries:
print(s)
predictions.extend(summaries)
with open(output_file, "w") as w:
for p in predictions:
w.write(p.strip().replace("\n", " ") + "\n")
gazeta_test = load_dataset('IlyaGusev/gazeta', script_version="v1.0")["test"]
predict("IlyaGusev/rut5_base_sum_gazeta", list(gazeta_test), "t5_predictions.txt")
```
Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py)
Flags: --language ru --tokenize-after --lower
|
timm/ViT-SO400M-14-SigLIP | timm | 2023-10-25T21:53:00Z | 2,534 | 12 | open_clip | [
"open_clip",
"safetensors",
"clip",
"siglip",
"zero-shot-image-classification",
"dataset:webli",
"arxiv:2303.15343",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | 2023-10-16T23:47:35Z | ---
tags:
- clip
- siglip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: apache-2.0
datasets:
- webli
---
# Model card for ViT-SO400M-14-SigLIP
A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI.
This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only).
## Model Details
- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Original:** https://github.com/google-research/big_vision
- **Dataset:** WebLI
- **Papers:**
- Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343
## Model Usage
### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8
model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-SO400M-14-SigLIP')
tokenizer = get_tokenizer('hf-hub:timm/ViT-SO400M-14-SigLIP')
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)
text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
### With `timm` (for image embeddings)
```python
from urllib.request import urlopen
from PIL import Image
import timm
image = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_so400m_patch14_siglip_224',
pretrained=True,
num_classes=0,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
```
## Citation
```bibtex
@article{zhai2023sigmoid,
title={Sigmoid loss for language image pre-training},
author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
journal={arXiv preprint arXiv:2303.15343},
year={2023}
}
```
```bibtex
@misc{big_vision,
author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander},
title = {Big Vision},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/google-research/big_vision}}
}
```
|
RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf | RichardErkhov | 2024-06-16T05:56:07Z | 2,534 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-16T02:11:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-8B-Instruct-abliterated-v2 - GGUF
- Model creator: https://huggingface.co/cognitivecomputations/
- Original model: https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-8B-Instruct-abliterated-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3-8B-Instruct-abliterated-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3-8B-Instruct-abliterated-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3-8B-Instruct-abliterated-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3-8B-Instruct-abliterated-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3-8B-Instruct-abliterated-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3-8B-Instruct-abliterated-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/cognitivecomputations_-_Llama-3-8B-Instruct-abliterated-v2-gguf/blob/main/Llama-3-8B-Instruct-abliterated-v2.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
license: llama3
---
# Model Card for Llama-3-8B-Instruct-abliterated-v2
## Overview
This model card describes the Llama-3-8B-Instruct-abliterated-v2 model, which is an orthogonalized version of the meta-llama/Llama-3-8B-Instruct model, and an improvement upon the previous generation Llama-3-8B-Instruct-abliterated. This variant has had certain weights manipulated to inhibit the model's ability to express refusal.
[Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations)
## Details
* The model was trained with more data to better pinpoint the "refusal direction".
* This model is MUCH better at directly and succinctly answering requests without producing even so much as disclaimers.
## Methodology
The methodology used to generate this model is described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)'
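As a rough illustration of the orthogonalization idea (a toy sketch with a made-up direction, not the authors' actual procedure, which is in the linked post and notebook): given an estimated "refusal direction" in the residual stream, each weight matrix that writes into that stream is modified so it can no longer produce outputs along that direction.

```python
import torch

def ablate_direction(W: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of W's outputs that lies along `direction`.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    direction: (d_out,) vector estimated as the "refusal direction".
    """
    d = direction / direction.norm()
    # Orthogonalize: W <- (I - d d^T) W, so W can no longer write along d
    return W - torch.outer(d, d) @ W

# Toy example: random tensors stand in for a real layer and a real direction
W = torch.randn(8, 8)
refusal_dir = torch.randn(8)
W_ablated = ablate_direction(W, refusal_dir)
# The ablated weights now have (numerically) zero component along the direction
print((refusal_dir / refusal_dir.norm()) @ W_ablated)
```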
## Quirks and Side Effects
This model may come with interesting quirks, as the methodology is still new and untested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.
## Availability
GGUF quants are available [here](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-v2-GGUF).
## How to Use
This model is available for use in the Transformers library.
|
RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf | RichardErkhov | 2024-06-22T23:23:34Z | 2,533 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-22T19:05:24Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UNfilteredAI-1B - GGUF
- Model creator: https://huggingface.co/UnfilteredAI/
- Original model: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UNfilteredAI-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q2_K.gguf) | Q2_K | 0.39GB |
| [UNfilteredAI-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_XS.gguf) | IQ3_XS | 0.43GB |
| [UNfilteredAI-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_S.gguf) | IQ3_S | 0.45GB |
| [UNfilteredAI-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_S.gguf) | Q3_K_S | 0.45GB |
| [UNfilteredAI-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ3_M.gguf) | IQ3_M | 0.46GB |
| [UNfilteredAI-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K.gguf) | Q3_K | 0.49GB |
| [UNfilteredAI-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_M.gguf) | Q3_K_M | 0.49GB |
| [UNfilteredAI-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q3_K_L.gguf) | Q3_K_L | 0.53GB |
| [UNfilteredAI-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ4_XS.gguf) | IQ4_XS | 0.55GB |
| [UNfilteredAI-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_0.gguf) | Q4_0 | 0.57GB |
| [UNfilteredAI-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.IQ4_NL.gguf) | IQ4_NL | 0.57GB |
| [UNfilteredAI-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K_S.gguf) | Q4_K_S | 0.57GB |
| [UNfilteredAI-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K.gguf) | Q4_K | 0.6GB |
| [UNfilteredAI-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_K_M.gguf) | Q4_K_M | 0.6GB |
| [UNfilteredAI-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q4_1.gguf) | Q4_1 | 0.63GB |
| [UNfilteredAI-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_0.gguf) | Q5_0 | 0.69GB |
| [UNfilteredAI-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K_S.gguf) | Q5_K_S | 0.69GB |
| [UNfilteredAI-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K.gguf) | Q5_K | 0.7GB |
| [UNfilteredAI-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_K_M.gguf) | Q5_K_M | 0.7GB |
| [UNfilteredAI-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q5_1.gguf) | Q5_1 | 0.74GB |
| [UNfilteredAI-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q6_K.gguf) | Q6_K | 0.81GB |
| [UNfilteredAI-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/UnfilteredAI_-_UNfilteredAI-1B-gguf/blob/main/UNfilteredAI-1B.Q8_0.gguf) | Q8_0 | 1.05GB |
Original model description:
---
license: other
language:
- en
tags:
- UnfilteredAI
---
# UNfilteredAI-1B
**Model Name**: UNfilteredAI-1B
**Model Type**: Text Generation
## About the Model
The UNfilteredAI-1B model is a large-scale text generation model developed by UnfilteredAI. This model is designed to push the boundaries of creativity and innovation in AI-generated content, without the constraints of traditional content moderation or filtering.
## Key Features
- **Uncensored and Unrestricted**: The UNfilteredAI-1B model is specifically engineered to generate text without any content restrictions or limitations. This allows for the exploration of a wide range of topics and styles, including potentially controversial or sensitive subject matter.
- **Extensive Training**: The model has been trained on a vast corpus of diverse textual data, enabling it to generate highly coherent and contextually relevant content across a broad range of domains.
- **Versatile Applications**: The UNfilteredAI-1B model can be utilized for a variety of text-based tasks, such as creative writing, conversational AI, and even educational or research-oriented applications.
## Intended Use
The UNfilteredAI-1B model is intended for use by experienced and responsible AI researchers, developers, and enthusiasts who are interested in pushing the boundaries of language generation and exploring the potential of uncensored AI technologies.
## Limitations and Ethical Considerations
- **Potential for Misuse**: The uncensored nature of the UNfilteredAI-1B model means that it could be used to generate harmful, unethical, or illegal content. Users should exercise caution and responsibility when utilizing this model.
- **Bias and Inconsistency**: As with many large language models, the UNfilteredAI-1B model may exhibit biases and inconsistencies in its outputs, which could lead to the generation of inaccurate, inappropriate, or even offensive content.
- **Sensitive Content**: The model is capable of generating explicit, adult-oriented, or otherwise sensitive content. Users should be aware of the potential risks and ensure that the model is used in an appropriate and ethical manner.
UnfilteredAI acknowledges the significant ethical considerations and potential risks associated with the development and deployment of uncensored AI models. We encourage users to engage with this model responsibly and to be mindful of the potential impact of their actions.
|
mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF | mradermacher | 2024-06-14T10:21:14Z | 2,532 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:cognitivecomputations/dolphin-2.9.3-qwen2-1.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-14T10:06:15Z | ---
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-1.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-1.5b.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
cointegrated/roberta-large-cola-krishna2020 | cointegrated | 2023-06-13T09:38:15Z | 2,530 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | This is a RoBERTa-large classifier trained on the CoLA corpus [Warstadt et al., 2019](https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00290),
which contains sentences paired with grammatical acceptability judgments. The model can be used to evaluate the fluency of machine-generated English sentences, e.g. for evaluating text style transfer.
The model was trained in the paper [Krishna et al, 2020. Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700), and its original version is available at [their project page](http://style.cs.umass.edu). We converted this model from Fairseq to Transformers format. All credit goes to the authors of the original paper.
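For illustration, here is a hedged usage sketch with the standard `transformers` text-classification pipeline; the example sentences are arbitrary, and the label names returned depend on this checkpoint's config and are not documented here:
```python
# Hedged usage sketch: score the grammatical acceptability / fluency of sentences
# with the standard transformers text-classification pipeline.
from transformers import pipeline
classifier = pipeline("text-classification", model="cointegrated/roberta-large-cola-krishna2020")
sentences = [
    "The cat sat on the mat.",   # fluent
    "Cat the on mat sat the.",   # disfluent
]
for sentence in sentences:
    print(sentence, "->", classifier(sentence))
```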
## Citation
If you find this model useful and refer to it, please cite the original work:
```
@inproceedings{style20,
    author = {Kalpesh Krishna and John Wieting and Mohit Iyyer},
    booktitle = {Empirical Methods in Natural Language Processing},
    year = {2020},
    title = {Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
uer/roberta-base-finetuned-dianping-chinese | uer | 2023-10-17T15:19:16Z | 2,529 | 37 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"zh",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1708.02657",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: zh
widget:
- text: "这本书真的很不错"
---
# Chinese RoBERTa-Base Models for Text Classification
## Model description
This is the set of 5 Chinese RoBERTa-Base classification models fine-tuned by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be fine-tuned with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with more than one billion parameters and extends it to a multimodal pre-training framework.
You can download the 5 Chinese RoBERTa-Base classification models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| Dataset | Link |
| :-----------: | :-------------------------------------------------------: |
| **JD full** | [**roberta-base-finetuned-jd-full-chinese**][jd_full] |
| **JD binary** | [**roberta-base-finetuned-jd-binary-chinese**][jd_binary] |
| **Dianping** | [**roberta-base-finetuned-dianping-chinese**][dianping] |
| **Ifeng** | [**roberta-base-finetuned-ifeng-chinese**][ifeng] |
| **Chinanews** | [**roberta-base-finetuned-chinanews-chinese**][chinanews] |
## How to use
You can use this model directly with a pipeline for text classification (taking roberta-base-finetuned-chinanews-chinese as an example):
```python
>>> from transformers import AutoModelForSequenceClassification,AutoTokenizer,pipeline
>>> model = AutoModelForSequenceClassification.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-chinanews-chinese')
>>> text_classification = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
>>> text_classification("北京上个月召开了两会")
[{'label': 'mainland China politics', 'score': 0.7211663722991943}]
```
## Training data
5 Chinese text classification datasets are used. JD full, JD binary, and Dianping datasets consist of user reviews of different sentiment polarities. Ifeng and Chinanews consist of first paragraphs of news articles of different topic classes. They are collected by [Glyph](https://github.com/zhangxiangxiao/glyph) project and more details are discussed in the corresponding [paper](https://arxiv.org/abs/1708.02657).
## Training procedure
Models are fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for three epochs with a sequence length of 512, starting from the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved. We use the same hyper-parameters across the different models.
Taking roberta-base-finetuned-chinanews-chinese as an example:
```
python3 finetune/run_classifier.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--train_path datasets/glyph/chinanews/train.tsv \
--dev_path datasets/glyph/chinanews/dev.tsv \
--output_model_path models/chinanews_classifier_model.bin \
--learning_rate 3e-5 --epochs_num 3 --batch_size 32 --seq_length 512
```
Finally, we convert the fine-tuned model into Hugging Face's format:
```
python3 scripts/convert_bert_text_classification_from_uer_to_huggingface.py --input_model_path models/chinanews_classifier_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{zhang2017encoding,
title={Which encoding is the best for text classification in chinese, english, japanese and korean?},
author={Zhang, Xiang and LeCun, Yann},
journal={arXiv preprint arXiv:1708.02657},
year={2017}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
```
[jd_full]:https://huggingface.co/uer/roberta-base-finetuned-jd-full-chinese
[jd_binary]:https://huggingface.co/uer/roberta-base-finetuned-jd-binary-chinese
[dianping]:https://huggingface.co/uer/roberta-base-finetuned-dianping-chinese
[ifeng]:https://huggingface.co/uer/roberta-base-finetuned-ifeng-chinese
[chinanews]:https://huggingface.co/uer/roberta-base-finetuned-chinanews-chinese |
RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf | RichardErkhov | 2024-06-29T15:17:45Z | 2,529 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-29T14:06:45Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLLaMA-1.1B-OrcaPlatty - GGUF
- Model creator: https://huggingface.co/marcchew/
- Original model: https://huggingface.co/marcchew/TinyLLaMA-1.1B-OrcaPlatty/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLLaMA-1.1B-OrcaPlatty.Q2_K.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLLaMA-1.1B-OrcaPlatty.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLLaMA-1.1B-OrcaPlatty.IQ3_S.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLLaMA-1.1B-OrcaPlatty.IQ3_M.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q3_K.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLLaMA-1.1B-OrcaPlatty.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q4_0.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLLaMA-1.1B-OrcaPlatty.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q4_K.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q4_1.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q5_0.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q5_K.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q5_1.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q6_K.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLLaMA-1.1B-OrcaPlatty.Q8_0.gguf](https://huggingface.co/RichardErkhov/marcchew_-_TinyLLaMA-1.1B-OrcaPlatty-gguf/blob/main/TinyLLaMA-1.1B-OrcaPlatty.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: apache-2.0
base_model: jeff31415/TinyLlama-1.1B-1T-OpenOrca
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [jeff31415/TinyLlama-1.1B-1T-OpenOrca](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 20
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1726 | 0.03 | 8 | 2.3170 |
| 2.1444 | 0.05 | 16 | 2.2937 |
| 2.1036 | 0.08 | 24 | 2.2707 |
| 2.0703 | 0.1 | 32 | 2.2478 |
| 2.0604 | 0.13 | 40 | 2.2248 |
| 2.046 | 0.15 | 48 | 2.2013 |
| 1.9919 | 0.18 | 56 | 2.1780 |
| 1.9842 | 0.21 | 64 | 2.1547 |
| 1.9234 | 0.23 | 72 | 2.1320 |
| 1.9235 | 0.26 | 80 | 2.1099 |
| 1.9096 | 0.28 | 88 | 2.0884 |
| 1.8722 | 0.31 | 96 | 2.0679 |
| 1.8594 | 0.34 | 104 | 2.0479 |
| 1.8438 | 0.36 | 112 | 2.0283 |
| 1.7581 | 0.39 | 120 | 2.0089 |
| 1.7852 | 0.41 | 128 | 1.9901 |
| 1.7634 | 0.44 | 136 | 1.9714 |
| 1.7296 | 0.46 | 144 | 1.9531 |
| 1.6976 | 0.49 | 152 | 1.9353 |
| 1.6861 | 0.52 | 160 | 1.9173 |
| 1.6683 | 0.54 | 168 | 1.8993 |
| 1.6255 | 0.57 | 176 | 1.8826 |
| 1.619 | 0.59 | 184 | 1.8673 |
| 1.6455 | 0.62 | 192 | 1.8534 |
| 1.5784 | 0.65 | 200 | 1.8399 |
| 1.6078 | 0.67 | 208 | 1.8259 |
| 1.5703 | 0.7 | 216 | 1.8124 |
| 1.5215 | 0.72 | 224 | 1.7989 |
| 1.542 | 0.75 | 232 | 1.7852 |
| 1.5147 | 0.77 | 240 | 1.7721 |
| 1.5092 | 0.8 | 248 | 1.7589 |
| 1.4564 | 0.83 | 256 | 1.7456 |
| 1.4985 | 0.85 | 264 | 1.7324 |
| 1.4505 | 0.88 | 272 | 1.7189 |
| 1.4447 | 0.9 | 280 | 1.7052 |
| 1.4436 | 0.93 | 288 | 1.6924 |
| 1.4132 | 0.95 | 296 | 1.6799 |
| 1.3791 | 0.98 | 304 | 1.6680 |
| 1.3877 | 1.01 | 312 | 1.6565 |
| 1.3807 | 1.03 | 320 | 1.6453 |
| 1.3391 | 1.06 | 328 | 1.6352 |
| 1.3232 | 1.08 | 336 | 1.6251 |
| 1.3293 | 1.11 | 344 | 1.6159 |
| 1.3029 | 1.14 | 352 | 1.6074 |
| 1.3173 | 1.16 | 360 | 1.5992 |
| 1.3006 | 1.19 | 368 | 1.5926 |
| 1.2547 | 1.21 | 376 | 1.5863 |
| 1.2704 | 1.24 | 384 | 1.5805 |
| 1.2964 | 1.26 | 392 | 1.5749 |
| 1.277 | 1.29 | 400 | 1.5695 |
| 1.2718 | 1.32 | 408 | 1.5657 |
| 1.2379 | 1.34 | 416 | 1.5619 |
| 1.2746 | 1.37 | 424 | 1.5585 |
| 1.2349 | 1.39 | 432 | 1.5559 |
| 1.2264 | 1.42 | 440 | 1.5531 |
| 1.2365 | 1.45 | 448 | 1.5505 |
| 1.2242 | 1.47 | 456 | 1.5484 |
| 1.2094 | 1.5 | 464 | 1.5462 |
| 1.2196 | 1.52 | 472 | 1.5444 |
| 1.2447 | 1.55 | 480 | 1.5426 |
| 1.2127 | 1.57 | 488 | 1.5407 |
| 1.2278 | 1.6 | 496 | 1.5391 |
| 1.2089 | 1.63 | 504 | 1.5377 |
| 1.2069 | 1.65 | 512 | 1.5361 |
| 1.2264 | 1.68 | 520 | 1.5350 |
| 1.2027 | 1.7 | 528 | 1.5338 |
| 1.2138 | 1.73 | 536 | 1.5325 |
| 1.207 | 1.75 | 544 | 1.5313 |
| 1.2155 | 1.78 | 552 | 1.5304 |
| 1.2192 | 1.81 | 560 | 1.5295 |
| 1.2223 | 1.83 | 568 | 1.5287 |
| 1.2281 | 1.86 | 576 | 1.5278 |
| 1.1977 | 1.88 | 584 | 1.5269 |
| 1.2101 | 1.91 | 592 | 1.5261 |
| 1.2099 | 1.94 | 600 | 1.5254 |
| 1.1873 | 1.96 | 608 | 1.5245 |
| 1.204 | 1.99 | 616 | 1.5242 |
| 1.21 | 2.01 | 624 | 1.5239 |
| 1.242 | 2.04 | 632 | 1.5231 |
| 1.1696 | 2.06 | 640 | 1.5224 |
| 1.1803 | 2.09 | 648 | 1.5218 |
| 1.1692 | 2.12 | 656 | 1.5213 |
| 1.212 | 2.14 | 664 | 1.5208 |
| 1.1977 | 2.17 | 672 | 1.5204 |
| 1.187 | 2.19 | 680 | 1.5201 |
| 1.1858 | 2.22 | 688 | 1.5199 |
| 1.1824 | 2.25 | 696 | 1.5194 |
| 1.1914 | 2.27 | 704 | 1.5190 |
| 1.1815 | 2.3 | 712 | 1.5187 |
| 1.2021 | 2.32 | 720 | 1.5184 |
| 1.1872 | 2.35 | 728 | 1.5181 |
| 1.1901 | 2.37 | 736 | 1.5178 |
| 1.1933 | 2.4 | 744 | 1.5177 |
| 1.1773 | 2.43 | 752 | 1.5175 |
| 1.1935 | 2.45 | 760 | 1.5172 |
| 1.2118 | 2.48 | 768 | 1.5170 |
| 1.1816 | 2.5 | 776 | 1.5169 |
| 1.1842 | 2.53 | 784 | 1.5167 |
| 1.1891 | 2.55 | 792 | 1.5165 |
| 1.1883 | 2.58 | 800 | 1.5164 |
| 1.1506 | 2.61 | 808 | 1.5163 |
| 1.1708 | 2.63 | 816 | 1.5162 |
| 1.1944 | 2.66 | 824 | 1.5160 |
| 1.1575 | 2.68 | 832 | 1.5159 |
| 1.1698 | 2.71 | 840 | 1.5160 |
| 1.1525 | 2.74 | 848 | 1.5158 |
| 1.1767 | 2.76 | 856 | 1.5157 |
| 1.1943 | 2.79 | 864 | 1.5158 |
| 1.1727 | 2.81 | 872 | 1.5157 |
| 1.195 | 2.84 | 880 | 1.5157 |
| 1.1771 | 2.86 | 888 | 1.5157 |
| 1.1731 | 2.89 | 896 | 1.5156 |
| 1.191 | 2.92 | 904 | 1.5157 |
| 1.1903 | 2.94 | 912 | 1.5156 |
| 1.1821 | 2.97 | 920 | 1.5156 |
| 1.2 | 2.99 | 928 | 1.5156 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.1.dev0
- Tokenizers 0.15.0
|
maywell/TinyWand-SFT | maywell | 2024-01-07T00:19:01Z | 2,528 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-03T14:48:55Z | ---
license: apache-2.0
---
# **TinyWand-SFT**
<p align="left">
<img src="./TinyWand.png" width="150"/>
<p>
# **Model Description**
**1.63B, what about an SLM of such a humble size?**
## **Model Introduction**
**TinyWand-SFT** is a 1.63B SLM. Thanks to its small 1.63B size, the model can run on small devices or reach a high tokens/s throughput while still showing strong performance.
## **Model License**
apache-2.0
## **Model Performance**
TBD
### Limitations
Because of its small size, the model tends not to respond properly after Instruct fine-tuning when the prompt does not follow the expected template. If you target a specific task, fine-tuning is recommended over prompting.
For the same reason, it also scores quite low on general benchmarks.
## **Training Process**
TBD
## **Usage Guide**
**VRAM required for inference**
| Quantization | Input tokens | Output tokens | Memory usage |
|---|---|---|---|
| bf16(base) | 64 | 256 | 3,888 MiB |
| q4_K_M | 64 | 256 | 1,788 MiB |
**Prompt template**
This model uses the Alpaca prompt template.
The template can be applied with `apply_chat_template()`; see the [Hugging Face chat templating docs](https://huggingface.co/docs/transformers/main/chat_templating).
**You can load and use the model with the Python code below.**
*transformers and torch must be installed beforehand*
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # assumes an NVIDIA GPU
tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "maywell/TinyWand-SFT",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # switch to torch.float16 if your hardware does not support bfloat16
)
messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},  # applied the same way even if left empty
    {"role": "user", "content": "언어모델의 파라미터 수가 작으면 어떤 이점이 있어?"},  # "What are the advantages of a language model with few parameters?"
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` |
kodonho/Solar-OrcaDPO-Solar-Instruct-SLERP | kodonho | 2024-03-05T10:46:47Z | 2,528 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-12T01:25:43Z | ---
license: cc-by-nc-4.0
tags:
- mergekit
- merge
---
# Solar-based model with gradient SLERP
This is an English merged model based on
* [upstage/SOLAR-10.7B-Instruct-v1.0]
* [bhavinjawade/SOLAR-10B-OrcaDPO-Jawade]
# Avg. 74.3
GPU code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "kodonho/Solar-OrcaDPO-Solar-Instruct-SLERP"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "kodonho/Solar-OrcaDPO-Solar-Instruct-SLERP"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
Jayant9928/tnayajv2.0 | Jayant9928 | 2024-04-26T15:13:36Z | 2,528 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-26T12:51:16Z | ---
license: apache-2.0
---
The tnayaj-8B model is an innovative open-source language model specifically engineered for the biomedical domain. Crafted by Jayant AI Labs, this model harnesses state-of-the-art methodologies to achieve unparalleled performance across various biomedical tasks.
🏥 Specialization in medicine: tnayaj-8B caters to the intricate linguistic and informational demands of the medical and life sciences realms. Its refinement stems from extensive training on a comprehensive biomedical dataset, enabling precise and articulate text generation within the domain.
🎓 Exceptional Performance: the model boasts 8 billion parameters. 🧠 Advanced Training Methodologies: tnayaj-8B builds upon the foundational prowess of Meta-Llama-3-8B-Instruct. It integrates the DPO dataset and a tailored array of medical instruction data for refinement. Central to its training regimen are meticulously curated components, including:
|
ZeroWw/Phi-3-mini-4k-geminified-GGUF | ZeroWw | 2024-07-01T02:09:10Z | 2,528 | 0 | null | [
"gguf",
"en",
"license:mit",
"region:us"
] | null | 2024-07-01T02:02:25Z |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are quantized to f16.
All other tensors are quantized to q5_k or q6_k.
Result:
both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization
and they perform as well as the pure f16.
|
alexm-nm/tinyllama-24-marlin24-8bit-channelwise | alexm-nm | 2024-05-08T16:39:42Z | 2,527 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-08T16:32:38Z | ---
license: apache-2.0
---
|
ik28/MedMistral-instruct | ik28 | 2024-05-24T13:20:58Z | 2,526 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-24T11:37:38Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
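Since this section is left empty by the authors, here is a purely hypothetical sketch assuming a standard Mistral-style causal language model; nothing in it (prompt, dtype, device placement) is confirmed by the model card:
```python
# Hypothetical sketch only; the authors have not published usage code for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "ik28/MedMistral-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
prompt = "Briefly explain what hypertension is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```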
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Saber-LM-GGUF | mradermacher | 2024-06-06T18:17:36Z | 2,524 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/Saber-LM",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T16:15:47Z | ---
base_model: bunnycore/Saber-LM
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Saber-LM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Saber-LM-GGUF/resolve/main/Saber-LM.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-70B-Euryale-v2.1-i1-GGUF | mradermacher | 2024-06-14T04:06:20Z | 2,524 | 11 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T16:22:17Z | ---
base_model: Sao10K/L3-70B-Euryale-v2.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF/resolve/main/L3-70B-Euryale-v2.1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
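As a hedged sketch (not part of the original card), the two-part Q6_K quant listed above could be downloaded and joined into a single GGUF file as follows; this assumes the parts are simple byte splits that can be concatenated, as described in the README linked in the Usage section, and the output file name is an assumption:
```python
# Hedged sketch: download both parts of the Q6_K quant and stream-concatenate them
# into one GGUF file.
import shutil
from huggingface_hub import hf_hub_download
repo_id = "mradermacher/L3-70B-Euryale-v2.1-i1-GGUF"
part_names = [
    "L3-70B-Euryale-v2.1.i1-Q6_K.gguf.part1of2",
    "L3-70B-Euryale-v2.1.i1-Q6_K.gguf.part2of2",
]
with open("L3-70B-Euryale-v2.1.i1-Q6_K.gguf", "wb") as merged:
    for name in part_names:
        part_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, merged)  # stream copy avoids loading ~58 GB into memory
```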
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Envvi/Inkpunk-Diffusion | Envvi | 2022-11-29T16:31:21Z | 2,523 | 974 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-11-25T06:06:18Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
- diffusers
---
# Inkpunk Diffusion
Fine-tuned Stable Diffusion model trained with DreamBooth. Vaguely inspired by Gorillaz, FLCL, and Yoji Shinkawa. Use **_nvinkpunk_** in your prompts.
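A minimal usage sketch with the `diffusers` library (not part of the original card); the prompt and generation settings are illustrative:
```python
# Hedged usage sketch: generate an image with diffusers, including the "nvinkpunk"
# trigger token in the prompt. Requires: pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("Envvi/Inkpunk-Diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "nvinkpunk portrait of a robot samurai, dramatic lighting"
image = pipe(prompt).images[0]
image.save("inkpunk_sample.png")
```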
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Inkpunk-Diffusion:
[](https://huggingface.co/spaces/akhaliq/Inkpunk-Diffusion)
# Sample images

 |
manupande21/GPT2_PMC | manupande21 | 2024-05-13T05:43:05Z | 2,523 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T15:34:40Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: answer_logs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# answer_logs
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a set of around 8,000 questions and answers generated from PubMed Central open-access research papers.
## Model description
Finetuned gpt2
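The card does not include usage code; below is a hedged sketch using the standard text-generation pipeline. The question/answer prompt format is an assumption, since the exact format used during fine-tuning is not documented here.
```python
# Hedged usage sketch: generate an answer with the transformers text-generation pipeline.
from transformers import pipeline
generator = pipeline("text-generation", model="manupande21/GPT2_PMC")
# Assumed prompt format: the real fine-tuning format is not documented in this card.
prompt = "Question: What is the role of insulin in glucose metabolism?\nAnswer:"
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```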
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
aisquared/dlite-v2-1_5b | aisquared | 2024-03-28T18:15:25Z | 2,522 | 13 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:aisquared/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-16T03:38:49Z | ---
license: apache-2.0
datasets:
- aisquared/databricks-dolly-15k
language:
- en
library_name: transformers
---
# Model Card for `dlite-v2-1.5b`
<!-- Provide a quick summary of what the model is/does. -->
AI Squared's `dlite-v2-1.5b` is a large language
model which is derived from OpenAI's large [GPT-2](https://huggingface.co/gpt2-large) model and fine-tuned on a corpus of 15k records
([Databricks' "Dolly 15k" Dataset](https://huggingface.co/datasets/aisquared/databricks-dolly-15k)) to help it exhibit chat-based capabilities.
Just like [Databricks' Dolly V2 models](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm),
`dlite-v2-1.5b` (and all other members of the `dlite-v2` family) is licensed for both **research and commercial use.** We are extremely grateful
for the work that Databricks has done to create the `databricks-dolly-15k` dataset, for without it we would not be able to create and release this
model under such an open and permissive license.
While `dlite-v2-1.5b` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply
is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Apache v2.0
- **Finetuned from model:** GPT-2
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**`dlite-v2-1.5b` is not a state-of-the-art language model.** `dlite-v2-1.5b` is an experimental technology, and as with any experimental technology,
AI Squared urges potential users of this technology to test its capabilities thoroughly before usage.
Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include,
but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations.
Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
From your terminal, run:
```
pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```
The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/aisquared/dlite-v2-1_5b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
It is also fine to remove it if there is sufficient memory.
```python
from transformers import pipeline
import torch
generate_text = pipeline(model="aisquared/dlite-v2-1_5b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
You can then use the pipeline to answer instructions:
```python
res = generate_text("Who was George Washington?")
print(res)
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/dlite-v2-1_5b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("aisquared/dlite-v2-1_5b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("aisquared/dlite-v2-1_5b", device_map="auto", torch_dtype=torch.bfloat16)
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
### Model Performance Metrics
We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the DLite family.
Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to further show that none of the DLite models are
state of the art, but rather that chat-like behaviors in LLMs can be trained almost independently of model size.
| Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|:--------------|----------------:|-----------:|---------:|------------:|-------------:|---------:|-------------:|
| dlite-v2-124m | 0.199659 | 0.447811 | 0.494801 | 0.291675 | 0.156 | 0.620239 | 0.487766 |
| gpt2 | 0.190273 | 0.438131 | 0.487156 | 0.289185 | 0.164 | 0.628945 | 0.51618 |
| dlite-v1-124m | 0.223549 | 0.462542 | 0.502446 | 0.293268 | 0.17 | 0.622416 | 0.494081 |
| gpt2-medium | 0.215017 | 0.490741 | 0.585933 | 0.333101 | 0.186 | 0.676279 | 0.531176 |
| dlite-v2-355m | 0.251706 | 0.486111 | 0.547401 | 0.344354 | 0.216 | 0.671926 | 0.52723 |
| dlite-v1-355m | 0.234642 | 0.507576 | 0.600306 | 0.338478 | 0.216 | 0.664309 | 0.496448 |
| gpt2-large | 0.216724 | 0.531566 | 0.604893 | 0.363971 | 0.194 | 0.703482 | 0.553275 |
| dlite-v1-774m | 0.250853 | 0.545875 | 0.614985 | 0.375124 | 0.218 | 0.698041 | 0.562747 |
| dlite-v2-774m | 0.269625 | 0.52904 | 0.613761 | 0.395937 | 0.256 | 0.691513 | 0.566693 |
| gpt2-xl | 0.25 | 0.582912 | 0.617737 | 0.400418 | 0.224 | 0.708379 | 0.583268 |
| dlite-v1-1_5b | 0.268771 | 0.588384 | 0.624159 | 0.401414 | 0.226 | 0.708379 | 0.584846 |
| dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 |
### Limitations
*DLite is an experimental technology and is not designed for use in any environment without significant testing and safety consideration.
Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include, but are not limited to: factual
inaccuracies, biases, offensive responses, toxicity, and hallucinations. Just as with any other LLM, we advise users of this technology
to exercise good judgment when applying this technology.*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v2-1_5b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.03 |
| ARC (25-shot) | 32.59 |
| HellaSwag (10-shot) | 53.98 |
| MMLU (5-shot) | 24.93 |
| TruthfulQA (0-shot) | 38.77 |
| Winogrande (5-shot) | 54.7 |
| GSM8K (5-shot) | 0.23 |
| DROP (3-shot) | 5.04 |
|
ammaraldirawi/faster-whisper-small-ar-int8 | ammaraldirawi | 2023-10-26T17:08:01Z | 2,521 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-10-26T15:44:58Z | Entry not found |
seb-c/Psydestroyer-20B | seb-c | 2024-03-05T03:19:22Z | 2,521 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:KoboldAI/LLaMA2-13B-Psyfighter2",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T10:26:09Z | ---
base_model:
- KoboldAI/LLaMA2-13B-Psyfighter2
library_name: transformers
license: llama2
tags:
- mergekit
- merge
---
# Psydestroyer 20B
I self-merged KoboldAI's Psyfighter-13B to get a 20B model, hoping to make it smarter.
GGUFs: https://huggingface.co/seb-c/Psydestroyer-20B-GGUF
I have only made a Q4_K_M as that is what I tend to use when running 20Bs on my 3060 12GB, but if the demand is there I can make more.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: "KoboldAI/LLaMA2-13B-Psyfighter2"
layer_range: [0, 16]
- sources:
- model: "KoboldAI/LLaMA2-13B-Psyfighter2"
layer_range: [8, 24]
- sources:
- model: "KoboldAI/LLaMA2-13B-Psyfighter2"
layer_range: [17, 32]
- sources:
- model: "KoboldAI/LLaMA2-13B-Psyfighter2"
layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
|
alpindale/c4ai-command-r-plus-GPTQ | alpindale | 2024-04-17T14:34:17Z | 2,521 | 20 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-04-05T03:08:15Z | ---
license: cc-by-nc-4.0
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
---
# Model Card for C4AI Command R+
🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**.
## Model Summary
C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities; these include Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-plus
- Model Size: 104 billion parameters
- Context length: 128K
**Try C4AI Command R+**
You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage**
Please install `transformers` from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 8-bit precision**
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 4-bit precision**
This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit).
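For completeness, here is a hedged sketch mirroring the 8-bit example above but with 4-bit loading; it is not taken from the original card, which instead points to the pre-quantized repository linked above.
```python
# Hedged sketch: 4-bit loading via bitsandbytes, mirroring the 8-bit example above.
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
gen_tokens = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0]))
```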
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
**Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
**Context length**: Command R+ supports a context length of 128K.
## Evaluations
Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
|:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:|
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |
We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, or tooling performance, or the evaluation of open-ended generations, which we believe Command R+ to be state-of-the-art at. For evaluations of RAG, multilingual and tooling read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open-ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
### Tool use & multihop capabilities:
Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.
Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.
We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with Command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
    {"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
    {
        "name": "internet_search",
        "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
        "parameter_definitions": {
            "query": {
                "description": "Query to search the internet with",
                "type": "str",
                "required": True
            }
        }
    },
    {
        "name": "directly_answer",
        "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
        "parameter_definitions": {}
    }
]

# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
    conversation,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
    """Returns a list of relevant document snippets for a textual query retrieved from the internet

    Args:
        query (str): Query to search the internet with
    """
    pass
```
```python
def directly_answer() -> List[Dict]:
    """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
    """
    pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
### Grounded Generation and RAG Capabilities:
Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
    {"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
    { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
    { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]

# render the grounded generation prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
    conversation,
    documents=documents,
    citation_mode="accurate", # or "fast"
    tokenize=False,
    add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
### Code Capabilities:
Command R+ has been optimized to interact with your code: you can ask it for code snippets, code explanations, or code rewrites. It might not perform well out of the box for pure code completion. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation-related instructions.
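As an illustrative sketch (not part of the original card, and assuming hardware able to host the 104B model, e.g. multiple GPUs or a quantized variant; the instruction text is a made-up example), applying this recommendation with the chat template and greedy decoding could look like:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A hypothetical code-related instruction
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding (do_sample=False) follows the low-temperature recommendation above
gen_tokens = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(gen_tokens[0][input_ids.shape[-1]:], skip_special_tokens=True))
```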
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible by providing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus). |
mradermacher/AvvoChat_AITA-GGUF | mradermacher | 2024-06-11T21:58:25Z | 2,521 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AndreaAlessandrelli4/AvvoChat_AITA",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T21:29:09Z | ---
base_model: AndreaAlessandrelli4/AvvoChat_AITA
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AndreaAlessandrelli4/AvvoChat_AITA
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
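As a rough sketch (assuming `llama-cpp-python` is installed and one of the quant files below, e.g. the Q4_K_M one, has been downloaded locally; the file path and prompt are illustrative), loading and prompting a GGUF file could look like:

```python
from llama_cpp import Llama

# Path is an assumption for illustration; point it at the downloaded quant file.
llm = Llama(model_path="AvvoChat_AITA.Q4_K_M.gguf", n_ctx=4096)

output = llm("Question: Can you briefly introduce yourself?\nAnswer:", max_tokens=128)
print(output["choices"][0]["text"])
```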
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA-GGUF/resolve/main/AvvoChat_AITA.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Helsinki-NLP/opus-mt-en-uk | Helsinki-NLP | 2023-08-16T11:31:36Z | 2,520 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
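A minimal usage sketch (not part of the original card) with the Hugging Face `transformers` translation pipeline; the example sentence is arbitrary:

```python
from transformers import pipeline

# English-to-Ukrainian translation with this checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-uk")
print(translator("How are you today?")[0]["translation_text"])
```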
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |
|
mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF | mradermacher | 2024-06-14T09:53:36Z | 2,520 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v2-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-06-14T02:01:42Z | ---
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Helsinki-NLP/opus-mt-zh-vi | Helsinki-NLP | 2023-08-16T12:09:19Z | 2,519 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ---
language:
- zh
- vi
tags:
- translation
license: apache-2.0
---
### zho-vie
* source group: Chinese
* target group: Vietnamese
* OPUS readme: [zho-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md)
* model: transformer-align
* source language(s): cmn_Hani cmn_Latn
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.eval.txt)
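A minimal usage sketch (not part of the original card) using the Marian classes from `transformers` directly; the Chinese example sentence is arbitrary:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Chinese-to-Vietnamese translation of an arbitrary example sentence
inputs = tokenizer(["世界上最大的企鹅是什么?"], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```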
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.vie | 20.0 | 0.385 |
### System Info:
- hf_name: zho-vie
- source_languages: zho
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'vi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: vie
- short_pair: zh-vi
- chrF2_score: 0.385
- bleu: 20.0
- brevity_penalty: 0.917
- ref_len: 4667.0
- src_name: Chinese
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: vi
- prefer_old: False
- long_pair: zho-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
ridger/MMfreeLM-2.7B | ridger | 2024-05-22T20:00:18Z | 2,519 | 26 | transformers | [
"transformers",
"safetensors",
"hgrn_bit",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T19:50:06Z | Entry not found |
mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF | mradermacher | 2024-06-03T12:22:27Z | 2,519 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"base_model:Hastagaras/Anjay-8B-Llama3-CrestRoot",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T05:53:27Z | ---
base_model: Hastagaras/Anjay-8B-Llama3-CrestRoot
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Hastagaras/Anjay-8B-Llama3-CrestRoot
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Anjay-8B-Llama3-CrestRoot-GGUF/resolve/main/Anjay-8B-Llama3-CrestRoot.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg | laion | 2023-04-18T22:05:22Z | 2,516 | 5 | open_clip | [
"open_clip",
"tensorboard",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2201.03545",
"arxiv:1910.04867",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2023-01-10T01:34:39Z | ---
license: mit
pipeline_tag: zero-shot-image-classification
library_name: open_clip
tags:
- clip
---
# Model Card for CLIP-convnext_base_w.laion2B-s13B-b82k-augreg
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increased augmentation + regularization for the image tower (a greater scale range for RRC, random erasing, and stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples seen and achieve an ImageNet zero-shot top-1 of >= 70.8%. Compared to a ViT-B/16 at 34B samples seen with a zero-shot of 70.2% (68.1% at 13B samples seen), this suggests the ConvNeXt architecture may be more sample-efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
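A minimal zero-shot classification sketch (not part of the original card; assumes `open_clip_torch` is installed, a local image file `cat.png` exists, and the candidate labels are arbitrary):

```python
import torch
from PIL import Image
import open_clip

model_name = "hf-hub:laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg"
model, _, preprocess = open_clip.create_model_and_transforms(model_name)
tokenizer = open_clip.get_tokenizer(model_name)
model.eval()

image = preprocess(Image.open("cat.png")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Cosine similarities scaled by 100, softmaxed into label probabilities
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(text_probs)
```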
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
In addition to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning also holds there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models, as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For the 256x256 models, the slurm script with srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for the portion of training run on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For the 320x320 models, the same command was used, but with 32 8-GPU nodes and a local batch size of 320, or 64 4-GPU nodes on JUWELS.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
Testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and with COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` |
Yntec/InsaneM3U | Yntec | 2023-07-30T21:42:31Z | 2,516 | 7 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"digiplay",
"cordonsolution8",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-27T22:39:42Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- digiplay
- cordonsolution8
---
# Insane m3u
A mix of m3u by digiplay and insaneRealistic by cordonsolution8.
DEMO images by digiplay!:








Original pages:
https://huggingface.co/digiplay/m3u
https://civitai.com/models/108585/insane-realistic-v10 |
facebook/metaclip-l14-fullcc2.5b | facebook | 2023-10-14T09:05:13Z | 2,516 | 2 | transformers | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"metaclip",
"arxiv:2309.16671",
"arxiv:2103.00020",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2023-10-09T21:16:27Z | ---
license: cc-by-nc-4.0
tags:
- vision
- metaclip
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# MetaCLIP model, large-sized version, patch resolution 14
MetaCLIP model applied to 2.5 billion data points of CommonCrawl (CC). It was introduced in the paper [Demystifying CLIP Data](https://arxiv.org/abs/2309.16671) by Xu et al. and first released in [this repository](https://github.com/facebookresearch/MetaCLIP).
Disclaimer: The team releasing MetaCLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The [Demystifying CLIP Data](https://arxiv.org/abs/2309.16671) paper aims to reveal CLIP’s method around training data curation. OpenAI never open-sourced code regarding their data preparation pipeline.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clip_overview.jpg"
alt="drawing" width="600"/>
<small> CLIP high-level overview. Taken from the <a href="https://arxiv.org/abs/2103.00020">CLIP paper</a>. </small>
## Intended uses & limitations
You can use the raw model for linking images with text in a shared embedding space. This enables things like zero-shot image classification, text-based image retrieval, image-based text retrieval, etc.
### How to use
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/clip#usage). Just replace the names of the models on the hub.
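As a minimal sketch following that documentation (the image URL is the standard COCO example used in the transformers docs; the candidate labels are arbitrary):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumption: the standard CLIP classes work for this checkpoint, as the docs above describe.
model = CLIPModel.from_pretrained("facebook/metaclip-l14-fullcc2.5b")
processor = CLIPProcessor.from_pretrained("facebook/metaclip-l14-fullcc2.5b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image from the transformers docs
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Image-text similarity scores, softmaxed into probabilities over the candidate labels
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```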
### BibTeX entry and citation info
```bibtex
@misc{xu2023demystifying,
title={Demystifying CLIP Data},
author={Hu Xu and Saining Xie and Xiaoqing Ellen Tan and Po-Yao Huang and Russell Howes and Vasu Sharma and Shang-Wen Li and Gargi Ghosh and Luke Zettlemoyer and Christoph Feichtenhofer},
year={2023},
eprint={2309.16671},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
stablediffusionapi/newdream-sdxl-20 | stablediffusionapi | 2023-12-10T00:28:00Z | 2,515 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2023-12-10T00:25:22Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# NewDream-SDXL 2.0 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "newdream-sdxl-20".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/newdream-sdxl-20)
Model link: [View model](https://stablediffusionapi.com/models/newdream-sdxl-20)
Credits: [View credits](https://civitai.com/?query=NewDream-SDXL%202.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "newdream-sdxl-20",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
mradermacher/Asclepius-Llama3-8B-GGUF | mradermacher | 2024-06-13T11:29:28Z | 2,514 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"en",
"dataset:starmpcc/Asclepius-Synthetic-Clinical-Notes",
"base_model:starmpcc/Asclepius-Llama3-8B",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T10:23:06Z | ---
base_model: starmpcc/Asclepius-Llama3-8B
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/starmpcc/Asclepius-Llama3-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Asclepius-Llama3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Asclepius-Llama3-8B-GGUF/resolve/main/Asclepius-Llama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MaziyarPanahi/mergekit-slerp-bjlsrkr-GGUF | MaziyarPanahi | 2024-06-17T10:08:32Z | 2,514 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-bjlsrkr"
] | text-generation | 2024-06-17T09:46:23Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
- base_model:arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-bjlsrkr-GGUF
base_model: mergekit-community/mergekit-slerp-bjlsrkr
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-bjlsrkr-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-bjlsrkr-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-bjlsrkr](https://huggingface.co/mergekit-community/mergekit-slerp-bjlsrkr)
## Description
[MaziyarPanahi/mergekit-slerp-bjlsrkr-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-bjlsrkr-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-bjlsrkr](https://huggingface.co/mergekit-community/mergekit-slerp-bjlsrkr).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
AI-Sweden-Models/gpt-sw3-126m | AI-Sweden-Models | 2024-01-29T13:20:08Z | 2,513 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-12-14T12:31:41Z | ---
license: other
language:
- da
- sv
- 'no'
- en
- is
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models, where the diversity (or lack thereof) of the training data has a downstream impact on model quality, GPT-SW3 has limitations in terms of, for example, bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may overrepresent some viewpoints and underrepresent others, contain stereotypes, and generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual; it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
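If you prefer to authenticate from Python instead of the CLI, an equivalent programmatic login is shown below (the token is a placeholder; use your own access token):
```python
from huggingface_hub import login
login(token="hf_xxx")  # placeholder; equivalent to running `huggingface-cli login` once
```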
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-126m"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No. |
kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1 | kyujinpy | 2024-03-04T12:15:30Z | 2,513 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-math-preference-dpo",
"dataset:kyujinpy/orca_math_dpo",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-25T11:16:37Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- Intel/orca_dpo_pairs
- argilla/distilabel-math-preference-dpo
- kyujinpy/orca_math_dpo
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLRCA-Math-Instruct-DPO-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.12
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1
name: Open LLM Leaderboard
---
# **Sakura-SOLRCA-Math-Instruct-DPO-v1**
<img src='./sakura.png' width=512>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using DPO method.
With [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) and [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).
I shared the merge version [kyujinpy/orca_math_dpo](https://huggingface.co/datasets/kyujinpy/orca_math_dpo).
I will share information about my model (training and code).
Please see: ⭐[Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Math-Instruct-DPO-v2 | 74.17 | 71.25 | 88.52 | 66.13 | 72.16 | 83.03 | 63.91 |
| Sakura-SOLRCA-Math-Instruct-DPO-v1 | 74.13 | 71.25 | 88.48 | 66.21 | 72.12 | 82.87 | 63.84 |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
# Implementation Code
```python
### Sakura-SOLRCA-Math-Instruct-DPO-v1
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v1"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
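After loading, generation works as with any causal LM; a minimal illustrative example (the prompt and generation settings below are arbitrary):
```python
prompt = "Q: What is 12 * 7?\nA:"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=64, do_sample=False)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```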
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLRCA-Math-Instruct-DPO-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.13|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |88.48|
|MMLU (5-Shot) |66.21|
|TruthfulQA (0-shot) |72.12|
|Winogrande (5-shot) |82.87|
|GSM8k (5-shot) |63.84|
|
dataautogpt3/ProteusV0.4-Lightning | dataautogpt3 | 2024-02-22T17:14:19Z | 2,513 | 25 | diffusers | [
"diffusers",
"text-to-image",
"license:gpl-3.0",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-02-22T14:50:01Z | ---
pipeline_tag: text-to-image
license: gpl-3.0
---
<Gallery />
## ProteusV0.4: The Style Update Lightning Edition
This update enhances stylistic capabilities, similar to Midjourney's approach, rather than advancing prompt comprehension. Methods used do not infringe on any copyrighted material.
## Proteus
Proteus serves as a sophisticated enhancement over OpenDalleV1.1, leveraging its core functionalities to deliver superior outcomes. Key areas of advancement include heightened responsiveness to prompts and augmented creative capacities. To achieve this, it was fine-tuned using approximately 220,000 GPTV captioned images from copyright-free stock images (with some anime included), which were then normalized. Additionally, DPO (Direct Preference Optimization) was employed through a collection of 10,000 carefully selected high-quality, AI-generated image pairs.
In pursuit of optimal performance, numerous LORA (Low-Rank Adaptation) models are trained independently before being selectively incorporated into the principal model via dynamic application methods. These techniques involve targeting particular segments within the model while avoiding interference with other areas during the learning phase. Consequently, Proteus exhibits marked improvements in portraying intricate facial characteristics and lifelike skin textures, all while sustaining commendable proficiency across various aesthetic domains, notably surrealism, anime, and cartoon-style visualizations.
Fine-tuned/trained on a total of 400k+ images at this point.
## Settings for ProteusV0.4-Lightning
Use these settings for the best results with ProteusV0.4-Lightning :
CFG Scale: Use a CFG scale of 1 to 2
Steps: 4 to 10 steps for more detail, 8 steps for faster results.
Sampler: Euler
Scheduler: normal
Resolution: 1280x1280 or 1024x1024
Please also consider using these keywords to improve your prompts:
best quality, HD, `~*~aesthetic~*~`.
If you are having trouble coming up with prompts you can use this GPT I put together to help you refine the prompt. https://chat.openai.com/g/g-RziQNoydR-diffusion-master
## Use it with 🧨 diffusers
```python
import torch
from diffusers import (
StableDiffusionXLPipeline,
EulerAncestralDiscreteScheduler,
AutoencoderKL
)
# Load VAE component
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix",
torch_dtype=torch.float16
)
# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"dataautogpt3/ProteusV0.4-Lightning",
vae=vae,
torch_dtype=torch.float16
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')
# Define prompts and generate image
prompt = "black fluffy gorgeous dangerous cat animal creature, large orange eyes, big fluffy ears, piercing gaze, full moon, dark ambiance, best quality, extremely detailed"
negative_prompt = "nsfw, bad quality, bad anatomy, worst quality, low quality, low resolutions, extra fingers, blur, blurry, ugly, wrongs proportions, watermark, image artifacts, lowres, ugly, jpeg artifacts, deformed, noisy image"
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
guidance_scale=2,
num_inference_steps=8
).images[0]
```
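Continuing from the snippet above, the pipeline returns a standard PIL image, so it can be saved or post-processed directly (the file name is arbitrary):
```python
image.save("proteus_v04_lightning.png")
```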
please support the work I do through donating to me on:
https://www.buymeacoffee.com/DataVoid
or following me on
https://twitter.com/DataPlusEngine |
cerebras/Cerebras-GPT-590M | cerebras | 2023-11-22T21:47:55Z | 2,512 | 20 | transformers | [
"transformers",
"pytorch",
"gpt2",
"causal-lm",
"text-generation",
"en",
"dataset:the_pile",
"arxiv:2304.03208",
"arxiv:2203.15556",
"arxiv:2101.00027",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-03-20T20:40:39Z | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- the_pile
pipeline_tag: text-generation
---
# Cerebras-GPT 590M
Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)!
## Model Description
The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face.
The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models.
All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal.
These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism.
Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo).
## Model Details
* Developed by: [Cerebras Systems](https://www.cerebras.net/)
* License: Apache 2.0
* Model type: Transformer-based Language Model
* Architecture: GPT-3 style architecture
* Data set: The Pile
* Tokenizer: Byte Pair Encoding
* Vocabulary Size: 50257
* Sequence Length: 2048
* Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models)
* Positional Encoding: Learned
* Language: English
* Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use.
**Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu).
This is the standard parameterization version of Cerebras-GPT with **590M** parameters
Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt)
<br><br>
| Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) |
|---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------|
| Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K |
| Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K |
| Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K |
| Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M |
| Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M |
| Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 → 1080 | 1.47M → 2.21M |
<br><br>
## Quickstart
This model can be easily loaded using the AutoModelForCausalLM functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-590M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-590M")
text = "Generative AI is "
```
And can be used with Hugging Face Pipelines
```python
from transformers import pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0]
print(generated_text['generated_text'])
```
or with `model.generate()`
```python
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5,
max_new_tokens=50, early_stopping=True,
no_repeat_ngram_size=2)
text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(text_output[0])
```
<br><br>
## Training data
Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther.
We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper.
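For illustration, the same GPT-2 byte-pair encoding can be inspected through the released checkpoint's tokenizer (the example sentence below is our own, not from the training data):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-590M")  # GPT-2 BPE, 50257-token vocabulary
ids = tokenizer("Cerebras-GPT was trained on the Pile.")["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids))
```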
Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set.
<br><br>
## Training procedure
We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or to match the shapes of GPT-3 models. The learning rate was warmed up for 375M tokens (1500 steps for the 111M and 256M models) and then decayed 10x following a cosine schedule. No dropout was used and weight decay was set to 0.1. All models are trained with an MSL (maximum sequence length) of 2048.
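For orientation, the hyperparameters above correspond roughly to the following PyTorch optimizer setup for the 590M model (an illustrative sketch only; the actual training ran on Cerebras hardware via the Model Zoo, not this snippet):
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-590M")
# AdamW with (β1, β2) = (0.9, 0.95), eps 1e-8, weight decay 0.1, and peak LR 2e-4 for the 590M model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2.0e-4,
    betas=(0.9, 0.95),
    eps=1e-8,
    weight_decay=0.1,
)
```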
All models were trained to Chinchilla point: 20 tokens per model parameter. Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details.
<br>
Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops
------------ | -------------- | ---------- | --------------- | ------ | -------------------- | -----
111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18
256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19
590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19
1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20
2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21
6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21
13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22
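As a worked sanity check of how these columns relate, the 590M row can be approximately reproduced from the 20-tokens-per-parameter rule and the batch/sequence sizes above (small differences come from the exact parameter count):
```python
params = 590e6                      # nominal parameter count of the 590M model
tokens = 20 * params                # Chinchilla point: 20 tokens per parameter
batch_tokens = 264 * 2048           # batch size (sequences) x sequence length
steps = tokens / batch_tokens
print(f"{tokens:.2e} tokens, ~{steps:.0f} steps")  # ~1.18e+10 tokens, ~21825 steps (table: 21836)
```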
<br><br>
## Evaluations
We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well.
We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper.
#### 0-shot Evaluation
| Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average |
| ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ |
| Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 |
| Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 |
| Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 |
| Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 |
| Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 |
| Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 |
| Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 |
#### 5-shot Evaluation
| Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA |
| -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- |
| Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 |
| Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 |
| Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 |
| Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 |
| Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 |
| Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 |
| Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 |
<br><br>
## Uses and Limitations
### Intended Use
The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.
You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.
Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.
### Out of Scope Use
Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.
Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.
### Risk, Bias, Ethical Considerations
* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.
<br><br>
## Acknowledgements
We are thankful to all Cerebras engineers, past and present, that made this work possible. |
jzli/DreamShaper-8 | jzli | 2024-05-16T14:15:31Z | 2,512 | 1 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-30T01:13:12Z | You can run this model for free at: https://sinkin.ai/m/4zdwGOB
We offer API access at low rates as well |
mradermacher/llama3-tofutune-8b-GGUF | mradermacher | 2024-06-12T14:21:49Z | 2,512 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:simonbutt/llama3-tofutune-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T12:09:58Z | ---
base_model: simonbutt/llama3-tofutune-8b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/simonbutt/llama3-tofutune-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
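If you want a minimal Python example instead, one option is `llama-cpp-python` (assuming it is installed and one of the quant files below has been downloaded locally; the file path is a placeholder):
```python
from llama_cpp import Llama
llm = Llama(model_path="llama3-tofutune-8b.Q4_K_M.gguf", n_ctx=4096)  # placeholder path to a downloaded quant
out = llm("Explain in one sentence what tofu is made from.", max_tokens=64)
print(out["choices"][0]["text"])
```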
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-tofutune-8b-GGUF/resolve/main/llama3-tofutune-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3-8B-Poppy-Moonfall-C-GGUF | mradermacher | 2024-06-15T08:44:46Z | 2,512 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:v000000/L3-8B-Poppy-Moonfall-C",
"endpoints_compatible",
"region:us"
] | null | 2024-06-12T20:15:43Z | ---
base_model: v000000/L3-8B-Poppy-Moonfall-C
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-C
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-C-GGUF/resolve/main/L3-8B-Poppy-Moonfall-C.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
facebook/musicgen-medium | facebook | 2023-11-17T15:25:23Z | 2,511 | 84 | transformers | [
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-06-08T17:28:18Z | ---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: a funky house with 80s hip hop vibes
example_title: Prompt 1
- text: a chill song with influences from lofi, chillstep and downtempo
example_title: Prompt 2
- text: a catchy beat for a podcast intro
example_title: Prompt 3
---
# MusicGen - Medium - 1.5B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
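A toy sketch of the delay pattern (purely illustrative, not the model's actual implementation): with a one-frame delay per codebook, decoding step `t` emits codebook `k` for frame `t - k`, so `T` frames of 4 codebooks need only about `T + 3` steps instead of `4T`:
```python
num_codebooks, num_frames = 4, 6
for step in range(num_frames + num_codebooks - 1):
    pairs = [(k, step - k) for k in range(num_codebooks) if 0 <= step - k < num_frames]
    print(f"step {step}: (codebook, frame) pairs predicted {pairs}")
```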
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [**medium** (this checkpoint)](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-medium")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-medium")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-medium")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("medium")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
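As an illustration, this kind of text–audio agreement can be sketched with the CLAP checkpoints available in 🤗 Transformers. The checkpoint name, the dummy audio array and the exact scoring protocol used in the paper are assumptions here, not details taken from this card:
```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

# Hypothetical CLAP checkpoint; the paper may use a different one.
clap = ClapModel.from_pretrained("laion/clap-htsat-unfused")
clap_processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# Dummy stand-in for a generated clip (5 s of noise at 48 kHz); in practice this
# would be the MusicGen output resampled from 32 kHz to CLAP's 48 kHz.
generated_audio = np.random.randn(48000 * 5).astype(np.float32)
texts = ["80s pop track with bassy drums and synth"]

inputs = clap_processor(
    text=texts, audios=[generated_audio], sampling_rate=48000, return_tensors="pt", padding=True
)
with torch.no_grad():
    text_emb = clap.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
    audio_emb = clap.get_audio_features(input_features=inputs["input_features"])

# Cosine similarity between text and audio embeddings, averaged over the batch.
clap_score = torch.nn.functional.cosine_similarity(text_emb, audio_emb).mean()
```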
Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| **facebook/musicgen-medium** | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then with a state-of-the-art music source separation method, namely the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates the end of a song, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking in diversity and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow the application to be broadened to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
PassionFriend/5G3xbSpKx4cJKz9rZ1FH113NbnMWdc2EJ1aM7YBwwJBp7FPM_vgg | PassionFriend | 2024-03-01T06:35:34Z | 2,511 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-06T19:18:05Z | Entry not found |
ptx0/sd3-diffusion-vpred-zsnr | ptx0 | 2024-06-17T03:57:39Z | 2,511 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"full",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"license:creativeml-openrail-m",
"diffusers:StableDiffusion3Pipeline",
"region:us"
] | text-to-image | 2024-06-15T16:21:50Z | ---
license: creativeml-openrail-m
base_model: "stabilityai/stable-diffusion-3-medium-diffusers"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- full
inference: true
widget:
- text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_0_0.png
- text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_1_1.png
- text: 'Alien planet, strange rock formations, glowing plants, bizarre creatures, surreal atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_2_2.png
- text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_3_0.png
- text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_4_1.png
- text: 'Alien marketplace, bizarre creatures, exotic goods, vibrant colors, otherworldly atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_5_2.png
- text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_6_0.png
- text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_7_1.png
- text: 'Child holding a balloon, happy expression, colorful balloons, sunny day, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_8_2.png
- text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_9_0.png
- text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_10_1.png
- text: 'a 4-panel comic strip showing an orange cat saying the words ''HELP'' and ''LASAGNA'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_11_2.png
- text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_12_0.png
- text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_13_1.png
- text: 'a hand is holding a comic book with a cover that reads ''The Adventures of Superhero'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_14_2.png
- text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_15_0.png
- text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_16_1.png
- text: 'Underground cave filled with crystals, glowing lights, reflective surfaces, fantasy environment, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_17_2.png
- text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_18_0.png
- text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_19_1.png
- text: 'Bustling cyberpunk bazaar, vendors, neon signs, advanced tech, crowded, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_20_2.png
- text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_21_0.png
- text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_22_1.png
- text: 'Cyberpunk hacker in a dark room, neon glow, multiple screens, intense focus, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_23_2.png
- text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_24_0.png
- text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_25_1.png
- text: 'a cybernetic anne of green gables with neural implant and bio mech augmentations'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_26_2.png
- text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_27_0.png
- text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_28_1.png
- text: 'Post-apocalyptic cityscape, ruined buildings, overgrown vegetation, dark and gritty, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_29_2.png
- text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_30_0.png
- text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_31_1.png
- text: 'Magical castle in a lush forest, glowing windows, fantasy architecture, high resolution, detailed textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_32_2.png
- text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_33_0.png
- text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_34_1.png
- text: 'Ruins of an ancient temple in an enchanted forest, glowing runes, mystical creatures, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_35_2.png
- text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_36_0.png
- text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_37_1.png
- text: 'Mystical forest, glowing plants, fairies, magical creatures, fantasy art, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_38_2.png
- text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_39_0.png
- text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_40_1.png
- text: 'Magical garden with glowing flowers, fairies, serene atmosphere, detailed plants, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_41_2.png
- text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_42_0.png
- text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_43_1.png
- text: 'Whimsical garden filled with fairies, magical plants, sparkling lights, serene atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_44_2.png
- text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_45_0.png
- text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_46_1.png
- text: 'Majestic dragon soaring through the sky, detailed scales, dynamic pose, fantasy art, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_47_2.png
- text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_48_0.png
- text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_49_1.png
- text: 'Fantasy world, floating islands in the sky, waterfalls, lush vegetation, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_50_2.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_51_0.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_52_1.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_53_2.png
- text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_54_0.png
- text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_55_1.png
- text: 'Space battle scene, starships fighting, laser beams, explosions, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_56_2.png
- text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_57_0.png
- text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_58_1.png
- text: 'Abandoned fairground at night, eerie rides, ghostly figures, fog, dark atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_59_2.png
- text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_60_0.png
- text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_61_1.png
- text: 'Spooky haunted mansion on a hill, dark and eerie, glowing windows, ghostly atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_62_2.png
- text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_63_0.png
- text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_64_1.png
- text: 'a hardcover physics textbook that is called PHYSICS FOR DUMMIES'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_65_2.png
- text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_66_0.png
- text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_67_1.png
- text: 'Epic medieval battle, knights in armor, dynamic action, detailed landscape, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_68_2.png
- text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_69_0.png
- text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_70_1.png
- text: 'Bustling medieval market with merchants, knights, and jesters, vibrant colors, detailed'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_71_2.png
- text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_72_0.png
- text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_73_1.png
- text: 'Cozy medieval tavern, warm firelight, adventurers drinking, detailed interior, rustic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_74_2.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_75_0.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_76_1.png
- text: 'Futuristic city skyline at night, neon lights, cyberpunk style, high contrast, sharp focus'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_77_2.png
- text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_78_0.png
- text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_79_1.png
- text: 'Forest with neon-lit trees, glowing plants, bioluminescence, surreal atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_80_2.png
- text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_81_0.png
- text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_82_1.png
- text: 'Bright neon sign in a busy city street, ''Open 24 Hours'', bold typography, glowing lights'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_83_2.png
- text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_84_0.png
- text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_85_1.png
- text: 'Vibrant neon sign, ''Bar'', bold typography, dark background, glowing lights, detailed design'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_86_2.png
- text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_87_0.png
- text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_88_1.png
- text: 'Pirate ship on the high seas, stormy weather, detailed sails, dramatic waves, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_89_2.png
- text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_90_0.png
- text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_91_1.png
- text: 'Pirate discovering a treasure chest, detailed gold coins, tropical island, dramatic lighting'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_92_2.png
- text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_93_0.png
- text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_94_1.png
- text: 'a photograph of a woman experiencing a psychedelic trip. trippy, 8k, uhd, fractal'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_95_2.png
- text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_96_0.png
- text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_97_1.png
- text: 'Cozy cafe on a rainy day, people sipping coffee, warm lights, reflections on wet pavement, photorealistic'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_98_2.png
- text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_99_0.png
- text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_100_1.png
- text: '1980s arcade, neon lights, vintage game machines, kids playing, vibrant colors, nostalgic atmosphere'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_101_2.png
- text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_102_0.png
- text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_103_1.png
- text: '1980s game room with vintage arcade machines, neon lights, vibrant colors, nostalgic feel'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_104_2.png
- text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_105_0.png
- text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_106_1.png
- text: 'Robot blacksmith forging metal, sparks flying, detailed workshop, futuristic and medieval blend'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_107_2.png
- text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_108_0.png
- text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_109_1.png
- text: 'Sleek robot performing a dance, futuristic theater, holographic effects, detailed, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_110_2.png
- text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_111_0.png
- text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_112_1.png
- text: 'High-tech factory where robots are assembled, detailed machinery, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_113_2.png
- text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_114_0.png
- text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_115_1.png
- text: 'Garden tended by robots, mechanical plants, colorful flowers, futuristic setting, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_116_2.png
- text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_117_0.png
- text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_118_1.png
- text: 'Cute robotic pet, futuristic home, sleek design, detailed features, friendly and animated'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_119_2.png
- text: 'cctv trail camera night time security picture of a wendigo in the woods'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_120_0.png
- text: 'cctv trail camera night time security picture of a wendigo in the woods'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_121_1.png
- text: 'cctv trail camera night time security picture of a wendigo in the woods'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_122_2.png
- text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_123_0.png
- text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_124_1.png
- text: 'Astronaut exploring an alien planet, detailed landscape, futuristic suit, cosmic background'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_125_2.png
- text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_126_0.png
- text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_127_1.png
- text: 'Futuristic space station orbiting a distant exoplanet, sleek design, detailed structures, cosmic backdrop'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_128_2.png
- text: 'a person holding a sign that reads ''SOON'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_129_0.png
- text: 'a person holding a sign that reads ''SOON'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_130_1.png
- text: 'a person holding a sign that reads ''SOON'''
parameters:
negative_prompt: ''''
output:
url: ./assets/image_131_2.png
- text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_132_0.png
- text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_133_1.png
- text: 'Steampunk airship in the sky, intricate design, Victorian aesthetics, dynamic scene, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_134_2.png
- text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_135_0.png
- text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_136_1.png
- text: 'Steampunk inventor in a workshop, intricate gadgets, Victorian attire, mechanical arm, goggles'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_137_2.png
- text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_138_0.png
- text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_139_1.png
- text: 'Stormy ocean with towering waves, dramatic skies, detailed water, intense atmosphere, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_140_2.png
- text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_141_0.png
- text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_142_1.png
- text: 'Dramatic stormy sea, lighthouse in the distance, lightning striking, dark clouds, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_143_2.png
- text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_144_0.png
- text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_145_1.png
- text: 'Graffiti artist creating a mural, vibrant colors, urban setting, dynamic action, high resolution'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_146_2.png
- text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_147_0.png
- text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_148_1.png
- text: 'Urban alleyway filled with vibrant graffiti art, tags and murals, realistic textures'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_149_2.png
- text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_150_0.png
- text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_151_1.png
- text: 'Urban street sign, ''Main Street'', bold typography, realistic textures, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_152_2.png
- text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_153_0.png
- text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_154_1.png
- text: 'Classic car show with vintage vehicles, vibrant colors, nostalgic atmosphere, high detail'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_155_2.png
- text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_156_0.png
- text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_157_1.png
- text: 'Retro diner sign, ''Joe''s Diner'', classic 1950s design, neon lights, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_158_2.png
- text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_159_0.png
- text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_160_1.png
- text: 'Vintage store sign with elaborate typography, ''Antique Shop'', hand-painted, weathered look'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_161_2.png
- text: 'a child wearing a pixar style wedding dress, in a play castle'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_162_0.png
- text: 'a child wearing a pixar style wedding dress, in a play castle'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_163_1.png
- text: 'a child wearing a pixar style wedding dress, in a play castle'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_164_2.png
- text: 'a cartoon bear in red shorts playing basketball with a sponge'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_165_0.png
- text: 'a cartoon bear in red shorts playing basketball with a sponge'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_166_1.png
- text: 'a cartoon bear in red shorts playing basketball with a sponge'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_167_2.png
- text: 'a superhero with a cape and a mask, fighting a dragon'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_168_0.png
- text: 'a superhero with a cape and a mask, fighting a dragon'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_169_1.png
- text: 'a superhero with a cape and a mask, fighting a dragon'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_170_2.png
- text: 'a dramatic scene with intense lighting showcasing a man and a woman in a tense conversation'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_171_0.png
- text: 'a dramatic scene with intense lighting showcasing a man and a woman in a tense conversation'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_172_1.png
- text: 'a dramatic scene with intense lighting showcasing a man and a woman in a tense conversation'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_173_2.png
- text: 'a group of people in a house, with a camera crew filming them'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_174_0.png
- text: 'a group of people in a house, with a camera crew filming them'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_175_1.png
- text: 'a group of people in a house, with a camera crew filming them'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_176_2.png
- text: 'a person in a lab coat holding a microphone stands in a forest, talking about the ecosystem'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_177_0.png
- text: 'a person in a lab coat holding a microphone stands in a forest, talking about the ecosystem'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_178_1.png
- text: 'a person in a lab coat holding a microphone stands in a forest, talking about the ecosystem'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_179_2.png
- text: 'a news anchor sitting at a desk, with a screen behind them showing a map of the world'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_180_0.png
- text: 'a news anchor sitting at a desk, with a screen behind them showing a map of the world'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_181_1.png
- text: 'a news anchor sitting at a desk, with a screen behind them showing a map of the world'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_182_2.png
- text: 'a soccer player kicking a ball into a goal, with a crowd cheering'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_183_0.png
- text: 'a soccer player kicking a ball into a goal, with a crowd cheering'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_184_1.png
- text: 'a soccer player kicking a ball into a goal, with a crowd cheering'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_185_2.png
- text: 'a man is holding a sign that says SOON'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_186_0.png
- text: 'a man is holding a sign that says SOON'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_187_1.png
- text: 'a man is holding a sign that says SOON'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_188_2.png
- text: 'a cute anime character named toast holding a sign that says SOON, sitting next to a red square on her left side, and a transparent sphere on her right side'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_189_0.png
- text: 'a cute anime character named toast holding a sign that says SOON, sitting next to a red square on her left side, and a transparent sphere on her right side'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_190_1.png
- text: 'a cute anime character named toast holding a sign that says SOON, sitting next to a red square on her left side, and a transparent sphere on her right side'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_191_2.png
---
# sd3-diffusion-vpred-zsnr
This is a full rank finetune derived from [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers).
This is a **diffusion** model trained with the DDPM objective instead of flow matching. **Be sure to set the appropriate scheduler configuration.**
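Since the checkpoint was trained with v-prediction and zero-terminal-SNR (see the training settings below) rather than flow matching, the pipeline's default scheduler needs to be replaced. A minimal sketch of one possible configuration with `diffusers` is shown here; the scheduler class and options are an assumption about a workable setup, not settings confirmed by this card:
```python
from diffusers import StableDiffusion3Pipeline, EulerDiscreteScheduler

pipeline = StableDiffusion3Pipeline.from_pretrained("ptx0/sd3-diffusion-vpred-zsnr")

# Assumed configuration: Euler sampling with v-prediction and rescaled
# zero-terminal-SNR betas, mirroring the training settings listed below.
pipeline.scheduler = EulerDiscreteScheduler.from_config(
    pipeline.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)
```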
The main validation prompt used during training was:
```
a cute anime character named toast holding a sign that says SOON, sitting next to a red square on her left side, and a transparent sphere on her right side
```
## Validation settings
- CFG: `5.5`
- CFG Rescale: `0.7`
- Steps: `30`
- Sampler: `euler`
- Seed: `42`
- Resolutions: `1024x1024,1152x960,896x1152`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 2
- Training steps: 1500
- Learning rate: 4e-07
- Effective batch size: 768
- Micro-batch size: 24
- Gradient accumulation steps: 4
- Number of GPUs: 8
- Prediction type: v_prediction
- Rescaled betas zero SNR: True
- Optimizer: AdamW, stochastic bf16
- Precision: Pure BF16
- Xformers: Enabled
## Datasets
### photo-concept-bucket
- Repeats: 0
- Total number of images: ~559104
- Total number of aspect buckets: 1
- Resolution: 0.5 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
## Inference
```python
import torch
from diffusers import StableDiffusion3Pipeline
model_id = "sd3-diffusion-vpred-zsnr"
prompt = "a cute anime character named toast holding a sign that says SOON, sitting next to a red square on her left side, and a transparent sphere on her right side"
negative_prompt = "malformed, disgusting, overexposed, washed-out"
pipeline = StableDiffusion3Pipeline.from_pretrained(model_id)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
    negative_prompt=negative_prompt,
num_inference_steps=30,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1152,
height=768,
guidance_scale=5.5,
guidance_rescale=0.7,
).images[0]
image.save("output.png", format="PNG")
```
|
THUDM/chatglm3-6b-32k | THUDM | 2024-01-04T03:59:04Z | 2,510 | 242 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"endpoints_compatible",
"region:us"
] | null | 2023-10-26T13:04:58Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM3-6B-32K
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-25ti5uohv-A_hs~am_D3Q8XPZMpj7wwQ" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
<p align="center">
📍Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a>
</p>
## 介绍 (Introduction)
ChatGLM3-6B-32K在[ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b)的基础上进一步强化了对于长文本的理解能力,能够更好的处理最多32K长度的上下文。具体地,我们对位置编码进行了更新,并设计了更有针对性的长文本训练方法,在对话阶段使用 32K 的上下文长度训练。在实际的使用中,如果您面临的上下文长度基本在 **8K 以内**,我们推荐使用[ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b);如果您需要处理**超过 8K** 的上下文长度,我们推荐使用ChatGLM3-6B-32K。
ChatGLM3-6B 是 ChatGLM 系列最新一代的开源模型,在保留了前两代模型对话流畅、部署门槛低等众多优秀特性的基础上,ChatGLM3-6B 引入了如下特性:
1. **更强大的基础模型:** ChatGLM3-6B 的基础模型 ChatGLM3-6B-Base 采用了更多样的训练数据、更充分的训练步数和更合理的训练策略。在语义、数学、推理、代码、知识等不同角度的数据集上测评显示,ChatGLM3-6B-Base 具有在 10B 以下的预训练模型中最强的性能。
2. **更完整的功能支持:** ChatGLM3-6B 采用了全新设计的 [Prompt 格式](https://github.com/THUDM/ChatGLM3/blob/main/README.md),除正常的多轮对话外。同时原生支持[工具调用](https://github.com/THUDM/ChatGLM3/blob/main/tools_using_demo/README.md)(Function Call)、代码执行(Code Interpreter)和 Agent 任务等复杂场景。
3. **更全面的开源序列:** 除了对话模型 ChatGLM3-6B 外,还开源了基础模型 ChatGLM-6B-Base、长文本对话模型 ChatGLM3-6B-32K。以上所有权重对学术研究**完全开放**,在填写[问卷](https://open.bigmodel.cn/mla/form)进行登记后**亦允许免费商业使用**。
Based on [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b), ChatGLM3-6B-32K further strengthens the ability to understand long texts and can better handle contexts of up to 32K tokens. Specifically, we updated the position encoding and designed a more targeted long-text training method, using a context length of 32K for training in the conversation stage. In actual use, if your context length is mostly within **8K**, we recommend using [ChatGLM3-6B](https://huggingface.co/THUDM/chatglm3-6b); if you need to handle context lengths exceeding **8K**, we recommend using ChatGLM3-6B-32K.
ChatGLM3-6B is the latest open-source model in the ChatGLM series. While retaining many excellent features such as smooth dialogue and low deployment threshold from the previous two generations, ChatGLM3-6B introduces the following features:
1. **More Powerful Base Model:** The base model of ChatGLM3-6B, ChatGLM3-6B-Base, employs a more diverse training dataset, more sufficient training steps, and a more reasonable training strategy. Evaluations on datasets such as semantics, mathematics, reasoning, code, knowledge, etc., show that ChatGLM3-6B-Base has the strongest performance among pre-trained models under 10B.
2. **More Comprehensive Function Support:** ChatGLM3-6B adopts a newly designed [Prompt format](https://github.com/THUDM/ChatGLM3/blob/main/PROMPT_en.md) and, beyond normal multi-turn dialogue, natively supports [function call](https://github.com/THUDM/ChatGLM3/blob/main/tools_using_demo/README.md), code interpreter, and complex scenarios such as agent tasks.
3. **More Comprehensive Open-source Series:** In addition to the dialogue model ChatGLM3-6B, the base model ChatGLM-6B-Base and the long-text dialogue model ChatGLM3-6B-32K are also open-sourced. All the weights are **fully open** for academic research, and after completing the [questionnaire](https://open.bigmodel.cn/mla/form) registration, they are also **allowed for free commercial use**.
## 软件依赖 (Dependencies)
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## 代码调用 (Code Usage)
可以通过如下代码调用 ChatGLM3-6B 模型来生成对话:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b-32k", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("THUDM/chatglm3-6b-32k", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM)。
For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM).
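For instance, the ChatGLM repositories document an on-the-fly quantization call that loads the weights in 4-bit to save GPU memory. A minimal sketch is shown below; whether the 32K variant exposes the same `quantize` method should be verified against the repository, so treat this as an assumption:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b-32k", trust_remote_code=True)
# Assumed: 4-bit on-the-fly quantization as documented for the ChatGLM series.
model = AutoModel.from_pretrained("THUDM/chatglm3-6b-32k", trust_remote_code=True).quantize(4).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```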
## 协议 (License)
本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM3-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。
The code in this repository is open-sourced under the [Apache-2.0 license](LICENSE), while the use of the ChatGLM3-6B model weights needs to comply with the [Model License](MODEL_LICENSE).
## 引用 (Citation)
如果你觉得我们的工作有帮助的话,请考虑引用下列论文。
If you find our work helpful, please consider citing the following papers.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
```
|
mradermacher/Karen_theEditor_13b_HF-i1-GGUF | mradermacher | 2024-06-11T19:14:02Z | 2,510 | 0 | transformers | [
"transformers",
"gguf",
"lora",
"en",
"base_model:FPHam/Karen_theEditor_13b_HF",
"endpoints_compatible",
"region:us"
] | null | 2024-06-11T17:06:52Z | ---
base_model: FPHam/Karen_theEditor_13b_HF
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- lora
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FPHam/Karen_theEditor_13b_HF
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
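As an illustration, multi-part GGUF files are typically plain byte-wise splits that are joined back into one file before loading (as described in the linked READMEs); a minimal Python sketch is below — the file names are placeholders, and this particular repository may not ship any split files:
```python
from pathlib import Path

# Placeholder names; adjust to the actual .gguf.part* files you downloaded.
parts = sorted(Path(".").glob("model.i1-Q6_K.gguf.part*"))
with open("model.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        merged.write(part.read_bytes())
```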
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Karen_theEditor_13b_HF-i1-GGUF/resolve/main/Karen_theEditor_13b_HF.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
TheBloke/psyonic-cetacean-20B-GGUF | TheBloke | 2023-11-29T13:58:31Z | 2,509 | 22 | transformers | [
"transformers",
"gguf",
"llama",
"storywriting",
"text adventure",
"not-for-all-audiences",
"base_model:jebcarter/psyonic-cetacean-20B",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-11-29T09:06:45Z | ---
base_model: jebcarter/psyonic-cetacean-20B
inference: false
license: other
license_name: microsoft-research-license
model_creator: Jeb Carter
model_name: Psyonic Cetacean 20B
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- storywriting
- text adventure
- not-for-all-audiences
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Psyonic Cetacean 20B - GGUF
- Model creator: [Jeb Carter](https://huggingface.co/jebcarter)
- Original model: [Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jeb Carter's Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/psyonic-cetacean-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF)
* [Jeb Carter's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jebcarter/psyonic-cetacean-20B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jeb Carter's Psyonic Cetacean 20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [psyonic-cetacean-20b.Q2_K.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes |
| [psyonic-cetacean-20b.Q3_K_S.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss |
| [psyonic-cetacean-20b.Q3_K_M.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss |
| [psyonic-cetacean-20b.Q3_K_L.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss |
| [psyonic-cetacean-20b.Q4_0.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [psyonic-cetacean-20b.Q4_K_S.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss |
| [psyonic-cetacean-20b.Q4_K_M.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended |
| [psyonic-cetacean-20b.Q5_0.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [psyonic-cetacean-20b.Q5_K_S.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended |
| [psyonic-cetacean-20b.Q5_K_M.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended |
| [psyonic-cetacean-20b.Q6_K.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss |
| [psyonic-cetacean-20b.Q8_0.gguf](https://huggingface.co/TheBloke/psyonic-cetacean-20B-GGUF/blob/main/psyonic-cetacean-20b.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/psyonic-cetacean-20B-GGUF and below it, a specific filename to download, such as: psyonic-cetacean-20b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/psyonic-cetacean-20B-GGUF psyonic-cetacean-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/psyonic-cetacean-20B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/psyonic-cetacean-20B-GGUF psyonic-cetacean-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m psyonic-cetacean-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
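For example, an interactive session might look like this (same settings as the command above, with the prompt flags swapped for interactive mode):
```shell
# Interactive chat-style session: -i -ins replaces the -p prompt argument
./main -ngl 35 -m psyonic-cetacean-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```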
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./psyonic-cetacean-20b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./psyonic-cetacean-20b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jeb Carter's Psyonic Cetacean 20B

---
Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llama2-based stack merge based on the models and recipe below:
- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/Psyfighter-2-13B)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
```yaml
slices:
- sources:
- model: Orca2flat
layer_range: [0, 16]
- sources:
- model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available)
layer_range: [8, 24]
- sources:
- model: Orca2flat
layer_range: [17, 32]
- sources:
- model: /KoboldAI/Psyfighter-2-13B (FP16 not yet available)
layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Note: while we did run an inverted merge, the output was not satisfactory and will not be released.
We first flattened the additional ChatML vocabulary tokens out of Orca-2-13B, then performed a stack merge with Psyfighter-2-13B. The results surprised us with their vividness, freshness of prose, obedience to instruction prompting, and formatting cohesion.
This model is focused on storywriting and text adventure, with a side order of Assistant and Chat functionality. Like its ancestor Psyfighter-2 this model will function better if you let it improvise and riff on your concepts rather than feeding it an excess of detail.
Additionally, either the removal of the ChatML vocab or the stack merging process itself has resulted in not only an uncensored model but an actively anti-censored model, so please be aware that this model can and will kill you during adventures or output NSFW material if prompted accordingly.
During testing, the model exhibited an especially strong affinity for science fiction and space opera writing, while handling fantasy elements quite well and horror elements slightly less so. Refer to the Psyfighter-2 model card for best prompting practices.
Despite that, we have tested the model out to 16000 context via Rope scaling and the model does not drive towards NSFW on its own. It will follow your tone and style very well.
Please enjoy, and if you encounter anything exciting or weird, please reach out to me at [[email protected]].
Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model.
<!-- original-model-card end -->
|
rdouglas/llama-2-wiki | rdouglas | 2024-06-19T16:31:27Z | 2,508 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-06-19T16:21:58Z | ---
license: apache-2.0
---
|
google/codegemma-1.1-7b-it | google | 2024-06-27T14:10:04Z | 2,507 | 44 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-30T21:33:23Z | ---
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
and agree to Google’s usage license. To do this, please ensure you’re logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
widget:
- text: '<start_of_turn>user Write a Python function to calculate the nth fibonacci
number.<end_of_turn> <start_of_turn>model
'
inference:
parameters:
max_new_tokens: 200
---
# CodeGemma
Model Page
: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://www.kaggle.com/models/google/codegemma/license/consent/verify/huggingface?returnModelRepoId=google/codegemma-1.1-7b-it)
Authors
: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models and are available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
| | [ **codegemma-2b** ](https://huggingface.co/google/codegemma-1.1-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [codegemma-7b-it](https://huggingface.co/google/codegemma-1.1-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion | ✅ | ✅ | |
| Generation from natural language | | ✅ | ✅ |
| Chat | | | ✅ |
| Instruction Following | | | ✅ |
### Sample Usage
This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need to use code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b), or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).
#### For Code Generation
```python
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("google/codegemma-1.1-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/codegemma-1.1-7b-it")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/codegemma-1.1-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
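For example, here is a minimal sketch of assembling that prompt by hand with plain string formatting. Remember to tokenize it with `add_special_tokens=False`, since `<bos>` is already part of the template shown above:

```python
def build_prompt(user_message: str) -> str:
    # Manually reproduce the chat template above for a single user turn.
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Write a hello world program")
```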
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
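To turn the result back into text, you can decode just the newly generated tokens, for example:

```py
# Decode only the tokens generated after the prompt
generated = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```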
### Inputs and Outputs
Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction tuned model variant: natural language text or prompt
Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction tuned model variant: code and natural language
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 to 1000 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code.
### Training Data Processing
The following data pre-processing techniques were applied:
* FIM: Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM and SPM modes. Our FIM settings are an 80% to 90% FIM rate with 50-50 PSM/SPM.
* Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing.
* We developed a novel technique for splitting the documents into prefix, middle, and suffix to make the suffix start at a more syntactically natural point rather than following a purely random distribution.
* Safety: Similarly to Gemma, we deployed rigorous safety filtering including filtering personal data, CSAM filtering and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Information about the hardware and software used to train the models.
### Hardware
CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
## Evaluation Information
Model evaluation metrics and results.
### Evaluation Approach
We evaluate CodeGemma on a variety of academic benchmarks across several domains:
* Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling
* Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust)
* Q&A: BoolQ, PIQA, TriviaQA
* Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande
* Math Reasoning: GSM8K, MATH
### Evaluation Results
#### Coding Benchmarks
Benchmark | [2B](https://huggingface.co/google/codegemma-2b) | [2B (1.1)](https://huggingface.co/google/codegemma-1.1-2b) | [7B](https://huggingface.co/google/codegemma-7b) | [7B-IT](https://huggingface.co/google/codegemma-7b-it) | [7B-IT (1.1)](https://huggingface.co/google/codegemma-1.1-7b-it)
----------------------|------|----------|------|-------|------------
HumanEval | 31.1 | 37.8 | 44.5 | 56.1 | 60.4
MBPP | 43.6 | 49.2 | 56.2 | 54.2 | 55.6
HumanEval Single Line | 78.4 | 79.3 | 76.1 | 68.3 | 77.4
HumanEval Multi Line | 51.4 | 51.0 | 58.4 | 20.1 | 23.7
BC HE C++ | 24.2 | 19.9 | 32.9 | 42.2 | 46.6
BC HE C# | 10.6 | 26.1 | 22.4 | 26.7 | 54.7
BC HE Go | 20.5 | 18.0 | 21.7 | 28.6 | 34.2
BC HE Java | 29.2 | 29.8 | 41.0 | 48.4 | 50.3
BC HE JavaScript | 21.7 | 28.0 | 39.8 | 46.0 | 48.4
BC HE Kotlin | 28.0 | 32.3 | 39.8 | 51.6 | 47.8
BC HE Python | 21.7 | 36.6 | 42.2 | 48.4 | 54.0
BC HE Rust | 26.7 | 24.2 | 34.1 | 36.0 | 37.3
BC MBPP C++ | 47.1 | 38.9 | 53.8 | 56.7 | 63.5
BC MBPP C# | 28.7 | 45.3 | 32.5 | 41.2 | 62.0
BC MBPP Go | 45.6 | 38.9 | 43.3 | 46.2 | 53.2
BC MBPP Java | 41.8 | 49.7 | 50.3 | 57.3 | 62.9
BC MBPP JavaScript | 45.3 | 45.0 | 58.2 | 61.4 | 61.4
BC MBPP Kotlin | 46.8 | 49.7 | 54.7 | 59.9 | 62.6
BC MBPP Python | 38.6 | 52.9 | 59.1 | 62.0 | 60.2
BC MBPP Rust | 45.3 | 47.4 | 52.9 | 53.5 | 52.3
#### Natural Language Benchmarks

## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
* Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach.
* Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details.
## Model Usage & Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Code Gemma models have a wide range of applications, which vary between IT and PT models. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
Code Completion
: PT models can be used to complete code with an IDE extension
Code Generation
: IT model can be used to generate code with or without an IDE extension
Code Conversation
: IT model can power conversation interfaces which discuss code.
Code Education
: IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice.
### Known Limitations
Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs.
### Ethical Considerations & Risks
The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for model details.
### Benefits
At the time of release, this family of models provides high-performance open code-focused large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models.
Using the coding benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives. |
ku-nlp/deberta-v2-tiny-japanese-char-wwm | ku-nlp | 2023-03-23T07:31:19Z | 2,505 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"deberta",
"character",
"wwm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-01-05T08:48:29Z | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- deberta
- deberta-v2
- fill-mask
- character
- wwm
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
mask_token: "[MASK]"
widget:
- text: "京都大学で自然言語処理を[MASK][MASK]する。"
---
# Model Card for Japanese character-level DeBERTa V2 tiny
## Model description
This is a Japanese DeBERTa V2 tiny model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
This model is trained with character-level tokenization and whole word masking.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-tiny-japanese-char-wwm')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-tiny-japanese-char-wwm')
sentence = '京都大学で自然言語処理を[MASK][MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
output = model(**encoding)
# A minimal way to inspect the predictions for the [MASK] positions (illustrative):
mask_positions = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = output.logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```
You can also fine-tune this model on downstream tasks.
## Tokenization
There is no need to tokenize texts in advance, and you can give raw texts to the tokenizer.
The texts are tokenized into character-level tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) for whole word masking.
Then, we built a sentencepiece model with 22,012 tokens including all characters that appear in the training corpus.
We tokenized raw corpora into character-level subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library.
The training took one day using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 190
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 6,080
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 100,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.499.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
|
hfl/llama-3-chinese-8b-instruct-v2-gguf | hfl | 2024-05-13T03:22:43Z | 2,505 | 17 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-05-07T03:51:08Z | ---
license: apache-2.0
language:
- zh
- en
---
# Llama-3-Chinese-8B-Instruct-v2-GGUF
<p align="center">
<a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>
This repository contains **Llama-3-Chinese-8B-Instruct-v2-GGUF** (llama.cpp/ollama/tgw, etc. compatible), which is the quantized version of [Llama-3-Chinese-8B-Instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2).
**Note: this is an instruction (chat) model, which can be used for conversation, QA, etc.**
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
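As a quick start, the GGUF files can be loaded directly by ollama via a Modelfile. This is only a sketch: the file name below is a placeholder for whichever quant you download, and for chat you may want to configure a Llama-3 style TEMPLATE in the Modelfile as well.

```shell
# Sketch: the file name is a placeholder; use whichever quant you downloaded
echo 'FROM ./llama-3-chinese-8b-instruct-v2-q4_k.gguf' > Modelfile
ollama create llama-3-chinese-8b-instruct-v2 -f Modelfile
ollama run llama-3-chinese-8b-instruct-v2
```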
## Performance
Metric: PPL, lower is better
*Note: PPL for the v2 models is higher than for v1, as v2's base model (Meta-Llama-3-8B-Instruct) also has a larger PPL than v1's (Meta-Llama-3-8B).*
| Quant | Size | PPL |
| :---: | -------: | ------------------: |
| Q2_K | 2.96 GB | 13.2488 +/- 0.17217 |
| Q3_K | 3.74 GB | 6.9618 +/- 0.08420 |
| Q4_0 | 4.34 GB | 6.8925 +/- 0.08437 |
| Q4_K | 4.58 GB | 6.4851 +/- 0.07892 |
| Q5_0 | 5.21 GB | 6.4608 +/- 0.07862 |
| Q5_K | 5.34 GB | 6.3742 +/- 0.07740 |
| Q6_K | 6.14 GB | 6.3494 +/- 0.07703 |
| Q8_0 | 7.95 GB | 6.3110 +/- 0.07673 |
| F16 | 14.97 GB | 6.3005 +/- 0.07658 |
## Others
- For full model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2
- For LoRA-only model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-LLaMA-Alpaca-3 |
mradermacher/EtherealRainbow-v0.2-8B-GGUF | mradermacher | 2024-06-14T01:41:27Z | 2,505 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"base_model:invisietch/EtherealRainbow-v0.2-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-13T17:50:23Z | ---
base_model: invisietch/EtherealRainbow-v0.2-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/invisietch/EtherealRainbow-v0.2-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
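For example, a single quant from the table below can be fetched with the `huggingface-hub` CLI (a sketch; substitute whichever file you want):

```shell
pip install huggingface-hub
huggingface-cli download mradermacher/EtherealRainbow-v0.2-8B-GGUF EtherealRainbow-v0.2-8B.Q4_K_M.gguf --local-dir .
```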
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EtherealRainbow-v0.2-8B-GGUF/resolve/main/EtherealRainbow-v0.2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
uer/gpt2-chinese-poem | uer | 2023-10-17T15:14:25Z | 2,504 | 35 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"zh",
"arxiv:1909.05658",
"arxiv:2212.06385",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: zh
widget:
- text: "[CLS] 万 叠 春 山 积 雨 晴 ,"
- text: "[CLS] 大 漠"
---
# Chinese Poem GPT2 Model
## Model description
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Besides, the model could also be pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with parameters above one billion, and extends it to a multimodal pre-training framework.
The model is used to generate Chinese ancient poems. You can download the model from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-poem](https://huggingface.co/uer/gpt2-chinese-poem).
Because the parameter skip_special_tokens is used in pipelines.py, special tokens such as [SEP] and [UNK] are deleted, so the output of the Hosted Inference API (on the right) may not be displayed properly.
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=50, do_sample=True)
[{'generated_text': '[CLS]梅 山 如 积 翠 , 丛 竹 隠 疏 花 。 水 影 落 寒 濑 , 竹 声 随 暮 鸦 。 茅 茨 数 间 屋 , 烟 火 两 三 家 。 安 得 携 琴 酒 , 相 逢 烟 雨 赊 。 向 湖 边 过 , 偏 怜 雪 里 看 。 浮 峦 如 画 出 , 远 树 与 天 连 。 月 上 僧 房 静 , 风 回 萤 火 寒 。 幽 情 何 可 写 , 赖 有 子 期 弹 。 棠 真'}]
```
When the parameter skip_special_tokens is False:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]梅 山 如 积 翠 ,", max_length=100, do_sample=True)
[{'generated_text': '[CLS]梅 山 如 积 翠 , 秀 出 何 其 雄 。 矫 矫 云 间 质 , 映 日 生 玲 珑 。 根 大 乱 石 结 , 枝 高 青 云 蒙 。 常 因 风 露 晚 , 隠 映 瑶 台 中 。 忽 闻 山 石 裂 , 万 里 吹 天 风 。 又 觉 此 身 高 , 迥 出 凡 境 空 。 清 影 落 潭 水 , 暗 香 来 逈 峰 。 却 寻 白 太 白 , 月 影 摇 江 东 。 [SEP] 而 非'}]
```
## Training data
The training data contains 800,000 Chinese ancient poems collected by the [chinese-poetry](https://github.com/chinese-poetry/chinese-poetry) and [Poetry](https://github.com/Werneror/Poetry) projects.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 200,000 steps with a sequence length of 128. We use an extended vocabulary to handle out-of-vocabulary words: every Chinese character that occurs at least 100 times in the poem corpus is added to the vocabulary.
```
python3 preprocess.py --corpus_path corpora/poem.txt \
--vocab_path models/google_zh_poem_vocab.txt \
--dataset_path poem_dataset.pt --processes_num 16 \
--seq_length 128 --data_processor lm
```
```
python3 pretrain.py --dataset_path poem_dataset.pt \
--vocab_path models/google_zh_poem_vocab.txt \
--config_path models/gpt2/config.json \
--output_model_path models/poem_gpt2_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 200000 --save_checkpoint_steps 50000 --report_steps 1000 \
--learning_rate 5e-4 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/poem_gpt2_model.bin-200000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
``` |
GRMenon/mental-health-mistral-7b-instructv0.2-finetuned-V2 | GRMenon | 2024-01-03T10:12:50Z | 2,504 | 17 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"mistral",
"text-generation",
"transformers",
"Inference Endpoints",
"pytorch",
"text-generation-inference",
"conversational",
"dataset:Amod/mental_health_counseling_conversations",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-29T06:18:35Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
- mistral
- text-generation
- transformers
- Inference Endpoints
- pytorch
- text-generation-inference
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mental-health-mistral-7b-instructv0.2-finetuned-V2
results: []
datasets:
- Amod/mental_health_counseling_conversations
---
# mental-health-mistral-7b-instructv0.2-finetuned-V2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the [mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6432
## Model description
A Mistral-7B-Instruct-v0.2 model finetuned on a corpus of mental health conversations between a psychologist and a user.
The intention was to create a mental health assistant, "Connor", to address user questions based on responses from a psychologist.
## Training and evaluation data
The model is finetuned on a corpus of mental health conversations between a psychologist and a client, in the form of context-response pairs. This dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists.
Dataset found here :-
* [Kaggle](https://www.kaggle.com/datasets/thedevastator/nlp-mental-health-conversations)
* [Huggingface](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4325 | 1.0 | 352 | 0.9064 |
| 1.2608 | 2.0 | 704 | 0.6956 |
| 1.1845 | 3.0 | 1056 | 0.6432 |
# Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel
base_model = "mistralai/Mistral-7B-Instruct-v0.2"
adapter = "GRMenon/mental-health-mistral-7b-instructv0.2-finetuned-V2"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
base_model,
add_bos_token=True,
trust_remote_code=True,
padding_side='left'
)
# Create peft model using base_model and finetuned adapter
config = PeftConfig.from_pretrained(adapter)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
load_in_4bit=True,
device_map='auto',
torch_dtype='auto')
model = PeftModel.from_pretrained(model, adapter)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
# Prompt content:
messages = [
{"role": "user", "content": "Hey Connor! I have been feeling a bit down lately.I could really use some advice on how to feel better?"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages,
tokenize=True,
add_generation_prompt=True,
return_tensors='pt').to(device)
output_ids = model.generate(input_ids=input_ids, max_new_tokens=512, do_sample=True, pad_token_id=2)
response = tokenizer.batch_decode(output_ids.detach().cpu().numpy(), skip_special_tokens = True)
# Model response:
print(response[0])
```
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0 |
Einmalumdiewelt/T5-Base_GNAD | Einmalumdiewelt | 2022-08-26T15:55:55Z | 2,503 | 19 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"de",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | 2022-03-02T23:29:04Z | ---
language:
- de
tags:
- generated_from_trainer
- summarization
metrics:
- rouge
model-index:
- name: T5-Base_GNAD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-Base_GNAD
This model is a fine-tuned version of [Einmalumdiewelt/T5-Base_GNAD](https://huggingface.co/Einmalumdiewelt/T5-Base_GNAD) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1025
- Rouge1: 27.5357
- Rouge2: 8.5623
- Rougel: 19.1508
- Rougelsum: 23.9029
- Gen Len: 52.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
microsoft/beit-base-patch16-224-pt22k | microsoft | 2023-05-08T14:27:35Z | 2,503 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"beit",
"image-classification",
"vision",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (base-sized model, pre-trained only)
BEiT model pre-trained in a self-supervised fashion on ImageNet-22k - also called ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
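As an illustration, here is a minimal sketch of that second approach: mean-pooling the patch representations and placing a linear layer on top. The 10-class head is an arbitrary, untrained example; this is not the exact fine-tuning recipe used in the paper.

```python
import torch
import requests
from PIL import Image
from transformers import BeitFeatureExtractor, BeitModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
backbone = BeitModel.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
classifier = torch.nn.Linear(backbone.config.hidden_size, 10)  # 10 classes chosen arbitrarily for illustration

inputs = feature_extractor(images=image, return_tensors="pt")
hidden_states = backbone(**inputs).last_hidden_state  # (batch, 1 + num_patches, hidden_size)
pooled = hidden_states[:, 1:, :].mean(dim=1)           # mean-pool the patch tokens, skipping the [CLS] token
logits = classifier(pooled)                             # untrained head: fine-tune it on your labeled data
```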
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
model = BeitForMaskedImageModeling.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
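In torchvision terms, this preprocessing roughly corresponds to the following sketch (the actual training pipeline also applies augmentations not shown here):

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```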
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
google/seahorse-xxl-q4 | google | 2023-10-26T21:55:30Z | 2,503 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2305.13194",
"arxiv:2204.04991",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-10-13T20:58:23Z | ---
license: cc-by-4.0
---
This is a model based on mT5-XXL that predicts a binary label for a given article and summary for Q4 (attribution), as defined in the [SEAHORSE paper](https://arxiv.org/abs/2305.13194) (Clark et al., 2023).
It is trained, similarly to the [TRUE paper (Honovich et al., 2022)](https://arxiv.org/pdf/2204.04991.pdf), on human ratings from the SEAHORSE dataset in 6 languages:
- German
- English
- Spanish
- Russian
- Turkish
- Vietnamese
The input format for the model is: "premise: ARTICLE hypothesis: SUMMARY", where ARTICLE is the document being summarized and SUMMARY is the candidate summary.
There is also a smaller (mT5-L) version of this model, as well as metrics trained for each of the other 5 dimensions described in the original paper.
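As a rough usage sketch (not taken from the original documentation), inference with the standard `transformers` seq2seq API might look like the following. The assumption that the metric decodes a textual "0"/"1" label follows the TRUE-style setup and should be verified against the released evaluation code.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/seahorse-xxl-q4")
model = AutoModelForSeq2SeqLM.from_pretrained("google/seahorse-xxl-q4")

article = "..."   # the document being summarized
summary = "..."   # the candidate summary

# Input format from the description above: "premise: ARTICLE hypothesis: SUMMARY"
inputs = tokenizer(f"premise: {article} hypothesis: {summary}", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # assumed to decode to "0" or "1"
```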
The full citation for the SEAHORSE paper is:
```
@misc{clark2023seahorse,
title={SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation},
author={Elizabeth Clark and Shruti Rijhwani and Sebastian Gehrmann and Joshua Maynez and Roee Aharoni and Vitaly Nikolaev and Thibault Sellam and Aditya Siddhant and Dipanjan Das and Ankur P. Parikh},
year={2023},
eprint={2305.13194},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Contact: [email protected] |
mradermacher/EPFL-TA-Meister-SFT-GGUF | mradermacher | 2024-06-03T11:36:46Z | 2,503 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:PeterAM4/EPFL-TA-Meister-SFT",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T10:46:03Z | ---
base_model: PeterAM4/EPFL-TA-Meister-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterAM4/EPFL-TA-Meister-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
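As a minimal example (not part of the original card), any quant from the table below can be run directly with a recent llama.cpp build; the chosen file and prompt are illustrative only:
```bash
# Example only: pick any quant file from the table below.
llama-cli --hf-repo mradermacher/EPFL-TA-Meister-SFT-GGUF \
  --hf-file EPFL-TA-Meister-SFT.Q4_K_M.gguf \
  -p "Explain the difference between supervised and unsupervised learning."
```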
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EPFL-TA-Meister-SFT-GGUF/resolve/main/EPFL-TA-Meister-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Salesforce/blip-itm-large-coco | Salesforce | 2023-08-01T14:48:50Z | 2,502 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"blip",
"image-text-matching",
"arxiv:2201.12086",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2022-12-13T11:41:12Z | ---
pipeline_tags: 'other'
tags:
- image-text-matching
languages:
- en
license: bsd-3-clause
---
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for BLIP trained on image-text matching - large architecture (with ViT large backbone) trained on COCO dataset.
|  |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for image-text matching: scoring how well a given caption matches an image (the examples below return both the ITM head score and the cosine similarity).
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
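Continuing from the example above (this note is not part of the original card), the ITM head returns two logits per image-text pair; a sketch of turning them into a match probability, assuming index 1 is the "match" class as in the BLIP reference code:
```python
import torch

# itm_scores has shape (batch, 2); softmax over the last dimension gives
# [no-match, match] probabilities under the assumption above.
match_probability = torch.softmax(itm_scores, dim=1)[:, 1].item()
print(f"Probability that the text matches the image: {match_probability:.4f}")
```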
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval
processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-large-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-large-coco", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "A woman and a dog sitting together in a beach."
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
itm_scores = model(**inputs)[0]
cosine_score = model(**inputs, use_itm_head=False)[0]
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
KoboldAI/LLaMA2-13B-Holomax | KoboldAI | 2023-08-17T14:18:33Z | 2,502 | 20 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-14T14:26:32Z | ---
license: other
---
# LLaMA 2 Holomax 13B - The writer's version of Mythomax
This is an expansion merge of the well-praised Mythomax model from Gryphe (60%) with MrSeeker's KoboldAI Holodeck model (40%).
The goal of this model is to enhance story-writing capabilities while preserving the desirable traits of the Mythomax model as much as possible (it does limit chat reply length).
Testers found that this model passes the InteracTV benchmark and is useful for story writing, chatting, and text adventures using Instruction mode.
Preservation of factual knowledge has not been tested, since we expect the original to be better in those use cases, as this merge was focused on fiction.
## Credits
This merge is not possible without the following models and model authors (Thanks to all of you for your work!)
Mythomax by Gryphe:
- Mythologic-L2 by Gryphe:
- - Hermes by Nous-Research
- Chronos V2 by Elinas
- Airoboros m2.0 by Jondurbin
- Huginn by Face of Goonery:
- - Hermes by Nous-Research
- StableBeluga by StabilityAI
- Airoboros by Jondurbin
- Chronos by Elinas
- Limarp by Lemonila
Holodeck by Mr.Seeker
## Guidelines
This model is designed to be flexible: it can be used as a co-writing model, with a variety of instruct formats (tested with Alpaca), and for regular chatting, both with traditional formatting and with instruct formatting.
The Alpaca format is as follows:
```
### Instruction:
Instruction goes here
### Response:
```
But if you have a different preferred format that works on one of the models above it will likely still work.
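As a rough sketch (not from the original card), the model can be loaded with `transformers` and prompted with the Alpaca format above; the prompt and generation settings here are illustrative only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/LLaMA2-13B-Holomax")
model = AutoModelForCausalLM.from_pretrained(
    "KoboldAI/LLaMA2-13B-Holomax",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "### Instruction:\nWrite the opening paragraph of a cozy mystery set in a lighthouse.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```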
## License
After publishing the model we were informed that one of the origin models upstream was uploaded under the AGPLv3, it is currently unknown what effects this has on this model because all weights have been modified and none of the original weights are intact.
At the moment of publishing (and writing this message) both merged models Holodeck and Mythomax were licensed Llama2, therefore the Llama2 license applies to this model.
However, Holodeck contains a non-commercial clause and may only be used for research or private use, while Limarp is licensed AGPLv3.
AGPLv3 conflicts with the commercial usage restrictions of the Llama2 license; therefore, we assume this aspect does not apply and that the authors intended for commercial usage restrictions to be permitted.
As a result we have decided to leave the model available for public download on the assumption that all involved authors intend for it to be licensed with commercial restrictions / llama2 restrictions in place, but with the further rights and freedoms the AGPLv3 grants a user.
If HF informs us that this assumption is incorrect and requests us to take this model down, we will republish the model in the form of the original merging script that was used to create the end result.
To comply with the AGPLv3 aspect, the "source" of this model is as follows (because this model was made at the binary level, we can only provide the script that created the model):
```
import json
import os
import shutil
import subprocess
from tkinter.filedialog import askdirectory, askopenfilename
import torch
from colorama import Fore, Style, init
from transformers import (AutoModel, AutoModelForCausalLM, AutoTokenizer,
LlamaConfig, LlamaForCausalLM, LlamaTokenizer,
PreTrainedTokenizer, PreTrainedTokenizerFast)
newline = '\n'
def clear_console():
if os.name == "nt": # For Windows
subprocess.call("cls", shell=True)
else: # For Linux and macOS
subprocess.call("clear", shell=True)
clear_console()
print(f"{Fore.YELLOW}Starting script, please wait...{Style.RESET_ALL}")
#mixer output settings
blend_ratio = 0.4 #setting to 0 gives first model, and 1 gives second model
fp16 = False #perform operations in fp16. Saves memory, but CPU inference will not be possible.
always_output_fp16 = True #if true, will output fp16 even if operating in fp32
max_shard_size = "10000MiB" #set output shard size
force_cpu = True #only use cpu
load_sharded = True #load both models shard by shard
print(f"Blend Ratio set to: {Fore.GREEN}{blend_ratio}{Style.RESET_ALL}")
print(f"Operations in fp16 is: {Fore.GREEN}{fp16}{Style.RESET_ALL}")
print(f"Save Result in fp16: {Fore.GREEN}{always_output_fp16}{Style.RESET_ALL}")
print(f"CPU RAM Only: {Fore.GREEN}{force_cpu}{Style.RESET_ALL}{newline}")
#test generation settings, only for fp32
deterministic_test = True #determines if outputs are always the same
test_prompt = "" #test prompt for generation. only for fp32. set to empty string to skip generating.
test_max_length = 32 #test generation length
blend_ratio_b = 1.0 - blend_ratio
def get_model_info(model):
with torch.no_grad():
outfo = ""
cntent = 0
outfo += "\n==============================\n"
for name, para in model.named_parameters():
cntent += 1
outfo += ('{}: {}'.format(name, para.shape))+"\n"
outfo += ("Num Entries: " + str(cntent))+"\n"
outfo += ("==============================\n")
return outfo
def merge_models(model1,model2):
with torch.no_grad():
tensornum = 0
for p1, p2 in zip(model1.parameters(), model2.parameters()):
p1 *= blend_ratio
p2 *= blend_ratio_b
p1 += p2
tensornum += 1
print("Merging tensor "+str(tensornum))
pass
def read_index_filenames(sourcedir):
index = json.load(open(sourcedir + '/pytorch_model.bin.index.json','rt'))
fl = []
for k,v in index['weight_map'].items():
if v not in fl:
fl.append(v)
return fl
print("Opening file dialog, please select FIRST model directory...")
model_path1 = "Gryphe/MythoMax-L2-13b"
print(f"First Model is: {model_path1}")
print("Opening file dialog, please select SECOND model directory...")
model_path2 = "KoboldAI/LLAMA2-13B-Holodeck-1"
print(f"Second Model is: {model_path2}")
print("Opening file dialog, please select OUTPUT model directory...")
model_path3 = askdirectory(title="Select Output Directory of merged model")
print(f"Merged Save Directory is: {model_path3}{newline}")
if not model_path1 or not model_path2:
print("\nYou must select two directories containing models to merge and one output directory. Exiting.")
exit()
with torch.no_grad():
if fp16:
torch.set_default_dtype(torch.float16)
else:
torch.set_default_dtype(torch.float32)
device = torch.device("cuda") if (torch.cuda.is_available() and not force_cpu) else torch.device("cpu")
print(device)
print("Loading Model 1...")
model1 = AutoModelForCausalLM.from_pretrained(model_path1) #,torch_dtype=torch.float16
model1 = model1.to(device)
model1.eval()
print("Model 1 Loaded. Dtype: " + str(model1.dtype))
print("Loading Model 2...")
model2 = AutoModelForCausalLM.from_pretrained(model_path2) #,torch_dtype=torch.float16
model2 = model2.to(device)
model2.eval()
print("Model 2 Loaded. Dtype: " + str(model2.dtype))
# Saving for posterity reasons, handy for troubleshooting if model result is broken
# #ensure both models have the exact same layout
# m1_info = get_model_info(model1)
# m2_info = get_model_info(model2)
# if m1_info != m2_info:
# print("Model 1 Info: " + m1_info)
# print("Model 2 Info: " + m2_info)
# print("\nERROR:\nThe two selected models are not compatible! They must have identical structure!")
# exit()
print("Merging models...")
merge_models(model1,model2)
if model_path3:
print("Saving new model...")
if always_output_fp16 and not fp16:
model1.half()
model1.save_pretrained(model_path3, max_shard_size=max_shard_size)
print("\nSaved to: " + model_path3)
print("\nCopying files to: " + model_path3)
files_to_copy = ["tokenizer.model", "special_tokens_map.json", "tokenizer_config.json", "vocab.json", "merges.txt"]
for filename in files_to_copy:
src_path = os.path.join(model_path1, filename)
dst_path = os.path.join(model_path3, filename)
try:
shutil.copy2(src_path, dst_path)
except FileNotFoundError:
print("\nFile " + filename + " not found in" + model_path1 + ". Skipping.")
else:
print("\nOutput model was not saved as no output path was selected.")
print("\nScript Completed.")
``` |
mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF | mradermacher | 2024-06-10T22:50:15Z | 2,502 | 0 | transformers | [
"transformers",
"gguf",
"th",
"en",
"base_model:airesearch/LLaMa3-8b-WangchanX-sft-Full",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T22:21:32Z | ---
base_model: airesearch/LLaMa3-8b-WangchanX-sft-Full
language:
- th
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/airesearch/LLaMa3-8b-WangchanX-sft-Full
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMa3-8b-WangchanX-sft-Full-GGUF/resolve/main/LLaMa3-8b-WangchanX-sft-Full.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kyujinpy/Sakura-SOLRCA-Instruct-DPO | kyujinpy | 2024-03-04T12:15:01Z | 2,501 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-24T18:12:10Z | ---
language:
- en
license: cc-by-nc-sa-4.0
datasets:
- Intel/orca_dpo_pairs
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLRCA-Instruct-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Instruct-DPO
name: Open LLM Leaderboard
---
# **Sakura-SOLRCA-Instruct-DPO**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. ((주)미디어그룹사람과숲) and Marker Inc. ((주)마커).**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using DPO method.
With [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs).
I have shared information about my model (training and code).
Please see: ⭐[Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).
# **Model Benchmark**
## Open leaderboard
- Follow up at the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
# Implementation Code
```python
### Sakura-SOLRCA-Instruct-DPO
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Sakura-SOLRCA-Instruct-DPO"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
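A short, illustrative continuation of the snippet above (not part of the original card) for actually generating text; the prompt format and decoding parameters are assumptions and should be adjusted as needed:
```python
# Prompt format is an assumption (Orca/Alpaca-style); check the base model's card.
prompt = "### User:\nSummarize the idea behind DPO in two sentences.\n\n### Assistant:\n"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```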
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLRCA-Instruct-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.05|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |88.49|
|MMLU (5-Shot) |66.17|
|TruthfulQA (0-shot) |72.10|
|Winogrande (5-shot) |82.95|
|GSM8k (5-shot) |63.46|
|
digiplay/PikasAnimatedMix_v1 | digiplay | 2024-06-08T19:44:26Z | 2,501 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-06-03T18:12:37Z | ---
license: other
---
Model info:
https://civitai.com/models/32739/pikas-animated-mix?modelVersionId=41291
Original author's demo image:

|
mradermacher/MarshmaToon-3B-model_stock-GGUF | mradermacher | 2024-06-04T04:16:02Z | 2,501 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:DreadPoor/MarshmaToon-3B-model_stock",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T04:04:46Z | ---
base_model: DreadPoor/MarshmaToon-3B-model_stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DreadPoor/MarshmaToon-3B-model_stock
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.IQ3_XS.gguf) | IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.IQ3_S.gguf) | IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.IQ3_M.gguf) | IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MarshmaToon-3B-model_stock-GGUF/resolve/main/MarshmaToon-3B-model_stock.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pankajmathur/orca_mini_v3_70b | pankajmathur | 2024-03-04T13:09:40Z | 2,500 | 23 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"arxiv:2306.02707",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-10T02:28:29Z | ---
language:
- en
license: other
library_name: transformers
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
pipeline_tag: text-generation
model-index:
- name: orca_mini_v3_70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=psmathur/orca_mini_v3_70b
name: Open LLM Leaderboard
---
# orca_mini_v3_70b
A Llama2-70b model trained on Orca Style datasets.
<br>

<br>
**P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.**
<br>
### quantized versions
Big thanks to [@TheBloke](https://huggingface.co/TheBloke)
1) https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML
2) https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ
<br>
#### license disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated orca_mini_v3_70b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|||
|:------:|:--------:|
|**Task**|**Value**|
|*ARC*|0.7125|
|*HellaSwag*|0.8785|
|*MMLU*|0.7018|
|*TruthfulQA*|0.6127|
|*Winogrande*|0.8272|
|*GSM8K*|0.4086|
|*DROP*|0.4017|
|**Total Average**|**0.649**|
<br>
### Prompt Format
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me about Orcas.
### Assistant:
```
#### OobaBooga Instructions:
This model requires up to 45GB of GPU VRAM in 4-bit, so it can be loaded directly on a single RTX 6000/L40/A40/A100/H100 GPU or on dual RTX 4090/L4/A10/RTX 3090/RTX A5000 GPUs.
So, if you have access to a machine with 45GB of GPU VRAM and have installed the [OobaBooga Web UI](https://github.com/oobabooga/text-generation-webui) on it,
you can download this model by using the HF repo link directly on the OobaBooga Web UI "Model" tab/page, and just use the **load-in-4bit** option.

After that, go to the Default tab/page on the OobaBooga Web UI, **copy-paste the above prompt format into the input box**, and enjoy!

<br>
#### Code Instructions:
The code example below shows how to use this model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_70b")
model = AutoModelForCausalLM.from_pretrained(
"psmathur/orca_mini_v3_70b",
torch_dtype=torch.float16,
load_in_4bit=True,
low_cpu_mem_usage=True,
device_map="auto"
)
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
#generate text steps
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
month = {august},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_v3_70b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 64.9 |
| ARC (25-shot) | 71.25 |
| HellaSwag (10-shot) | 87.85 |
| MMLU (5-shot) | 70.18 |
| TruthfulQA (0-shot) | 61.27 |
| Winogrande (5-shot) | 82.72 |
| GSM8K (5-shot) | 40.86 |
| DROP (3-shot) | 40.17 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_v3_70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.02|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |87.85|
|MMLU (5-Shot) |70.18|
|TruthfulQA (0-shot) |61.27|
|Winogrande (5-shot) |82.72|
|GSM8k (5-shot) |40.86|
|
mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF | mradermacher | 2024-06-05T18:11:33Z | 2,500 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cpm-ai/Llama3-Ocelot-8B-instruct-v01",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T17:44:09Z | ---
base_model: cpm-ai/Llama3-Ocelot-8B-instruct-v01
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/cpm-ai/Llama3-Ocelot-8B-instruct-v01
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Ocelot-8B-instruct-v01-GGUF/resolve/main/Llama3-Ocelot-8B-instruct-v01.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
larenspear/llama2-13b-WildJailbreak-Q5_K_M-GGUF | larenspear | 2024-06-30T23:24:00Z | 2,500 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:larenspear/copy_of_wildjailbreak_13",
"license:apache-2.0",
"region:us"
] | null | 2024-06-30T22:31:32Z | ---
base_model: larenspear/copy_of_wildjailbreak_13
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
extra_gated_prompt: Access to this model is automatically granted upon accepting the
[AI2 Responsible Use Guidelines](https://allenai.org/responsible-use.pdf), and completing
all fields below
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox
I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox
I agree that AI2 may use my information as described in the Privacy Policy: checkbox
I certify that the information I have provided is true and accurate: checkbox
---
# larenspear/copy_of_wildjailbreak_13-Q5_K_M-GGUF
This model was converted to GGUF format from [`larenspear/copy_of_wildjailbreak_13`](https://huggingface.co/larenspear/copy_of_wildjailbreak_13) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/larenspear/copy_of_wildjailbreak_13) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo larenspear/copy_of_wildjailbreak_13-Q5_K_M-GGUF --hf-file copy_of_wildjailbreak_13-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo larenspear/copy_of_wildjailbreak_13-Q5_K_M-GGUF --hf-file copy_of_wildjailbreak_13-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo larenspear/copy_of_wildjailbreak_13-Q5_K_M-GGUF --hf-file copy_of_wildjailbreak_13-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo larenspear/copy_of_wildjailbreak_13-Q5_K_M-GGUF --hf-file copy_of_wildjailbreak_13-q5_k_m.gguf -c 2048
```
|
Yntec/SQUEE | Yntec | 2024-03-20T13:35:25Z | 2,497 | 1 | diffusers | [
"diffusers",
"safetensors",
"Art",
"Fantasy",
"General purpose",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-30T13:51:24Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Art
- Fantasy
- General purpose
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# SQUEE!!!
THE PURPOSE OF THIS MODEL IS TO MAKE YOU SQUEE LIKE A SCHOOLGIRL WHEN YOU SEE WHAT IT GENERATES!
IF YOU'VE NEVER DONE THAT, THIS MODEL MAY NOT BE FOR YOU!!
SAMPLES AND PROMPTS:

(Click for larger)
Top left: cartoon, smile, orange cat, feminine, cartoon cat wearing miner outfit, wearing helmet, cat, full body, amazing composition, lens flare, movie composition, bokeh, depth of field,
Top right: a Tanuki playing guitar in a club. whimsical
Bottom left: a painting of a kangaroo by Bnhr, cute, nature, grass, tree, outdoors, forest, animal focus, antlers,
Bottom right: pretty cute little girl as Marie Antoinette playing on toy piano in bedroom
# Story
This is the model I'd have released if I didn't get the e77777 hash for Jackpot: https://huggingface.co/Yntec/Jackpot
When that happened I just went ahead and released it, but I never finished the model. This is the final version of Jackpot, which sacrifices the cinematic contrast for improved eyes, so I think SQUEE is better for characters, while Jackpot may be better for everything else, like objects, items or landscapes. Both models have the same compositions.
Comparison:

(Click for larger)
More samples and prompts:

(Click for larger)
Top left: young husband and daughter movie still. faces portrait. festive scene at a copper brewery with a wooden keg of beer in the center. sitting cute girl. Display mugs of dark beer accompanied Shirley by halloween ingredients
Top right: Focused gaze, boxer stance, black gloves with red accents, pretty adorable young girl with beautiful eyes, close-up, shallow depth of field, high contrast, cool color temperature, direct lighting, sharp focus on eyes, blurred foreground sparring glove, dynamic tension, determination, sweat-glistening skin, peek-through composition, anticipation atmosphere, gym setting suggested, personal struggle narrative, resilience symbolism
Bottom left: Highly detailed, High Quality, Masterpiece, beautiful, cute girl as toon link, teal headwear, Zelda
Bottom right: closeup photo of a pikachu riding motorcycle, forest, haze, halation, no humans, bloom, dramatic atmosphere, centred, rule of thirds, 200mm 1.4f macro shot |
mradermacher/JSL-MedLlama-3-8B-v9-GGUF | mradermacher | 2024-06-07T12:04:32Z | 2,497 | 1 | transformers | [
"transformers",
"gguf",
"llama-3-8b",
"sft",
"medical",
"en",
"base_model:johnsnowlabs/JSL-MedLlama-3-8B-v9",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-07T09:38:16Z | ---
base_model: johnsnowlabs/JSL-MedLlama-3-8B-v9
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
tags:
- llama-3-8b
- sft
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v9
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
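As a minimal example (not part of the original card), a quant from the table below can also be served locally with llama.cpp's OpenAI-compatible server; the file name and settings are illustrative only:
```bash
# Example only: serves the Q4_K_M quant on a local endpoint.
llama-server --hf-repo mradermacher/JSL-MedLlama-3-8B-v9-GGUF \
  --hf-file JSL-MedLlama-3-8B-v9.Q4_K_M.gguf -c 2048
```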
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/JSL-MedLlama-3-8B-v9-GGUF/resolve/main/JSL-MedLlama-3-8B-v9.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FredZhang7/anime-anything-promptgen-v2 | FredZhang7 | 2023-03-16T19:33:55Z | 2,496 | 54 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"stable-diffusion",
"anime",
"anything-v4",
"art",
"arxiv:2210.14140",
"en",
"dataset:FredZhang7/anime-prompts-180K",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-09T07:29:25Z | ---
license: creativeml-openrail-m
language:
- en
widget:
- text: 1girl, fate
- text: 1boy, league of
- text: 1girl, genshin
- text: 1boy, national basketball association
- text: 1girl, spy x
- text: 1girl, absurdres
tags:
- stable-diffusion
- anime
- anything-v4
- art
- arxiv:2210.14140
datasets:
- FredZhang7/anime-prompts-180K
---
## Fast Anime PromptGen
This model was trained on a dataset of **80,000** safe anime prompts for 3 epochs. I fetched the prompts from the [Safebooru API endpoint](https://safebooru.donmai.us/posts/random.json), but only accepted unique prompts with **up_score ≥ 8** and without any [blacklisted tags](./blacklist.txt).
I didn't release the V1 model because it often generated gibberish prompts. After trying all means to correct that behavior, I eventually figured out that the gibberish prompts were caused not by the pipeline parameters, model structure, or training duration, but by the random usernames in the training data.
Here's the complete [prompt preprocessing algorithm](./preprocess.py).
## Text-to-image Examples
Prefix *1girl* | [Generated *1girl* prompts](./anime_girl_settings.txt) | Model *Anything V4*

Prefix *1boy* | [Generated *1boy* prompts](./anime_boy_settings.txt) | Model *Anything V4*

## Contrastive Search
```
pip install --upgrade transformers
```
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2LMHeadModel.from_pretrained('FredZhang7/anime-anything-promptgen-v2')
prompt = r'1girl, genshin'
# generate text using fine-tuned model
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)
# generate 10 samples using contrastive search
outs = nlp(prompt, max_length=76, num_return_sequences=10, do_sample=True, repetition_penalty=1.2, temperature=0.7, top_k=4, early_stopping=True)
print('\nInput:\n' + 100 * '-')
print('\033[96m' + prompt + '\033[0m')
print('\nOutput:\n' + 100 * '-')
for i in range(len(outs)):
# remove trailing commas and double spaces
    outs[i] = str(outs[i]['generated_text']).replace('  ', ' ').rstrip(',')
print('\033[92m' + '\n\n'.join(outs) + '\033[0m\n')
```
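Note that the snippet above passes `do_sample=True`, `temperature`, and `top_k`, which amounts to top-k sampling. If you specifically want contrastive search as implemented in `transformers` (>= 4.24), it is selected via `penalty_alpha`. A minimal sketch reusing the model and tokenizer above; the exact `penalty_alpha`/`top_k` values are illustrative assumptions, not settings from this card:
```python
# Contrastive search proper: penalty_alpha + top_k, no sampling.
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output = model.generate(
    input_ids,
    penalty_alpha=0.6,   # degeneration penalty
    top_k=4,             # candidate pool size
    max_length=76,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```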
Output Example:

Please see [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2) for more info on the pipeline parameters.
## Awesome Tips
- If you feel like a generated anime character doesn't show emotions, try emoticons like `;o`, `:o`, `;p`, `:d`, `:p`, and `;d` in the prompt.
I also use `happy smirk`, `happy smile`, `laughing closed eyes`, etc. to make the characters more lively and expressive.
- Adding `absurdres`, instead of `highres` and `masterpiece`, to a prompt can drastically increase the sharpness and resolution of a generated image.
## Danbooru
[Link to the Danbooru version](https://huggingface.co/FredZhang7/danbooru-tag-generator) |
Aniemore/rubert-large-emotion-russian-cedr-m7 | Aniemore | 2023-04-07T18:09:17Z | 2,496 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"doi:10.57967/hf/1277",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-02-13T18:13:48Z | Entry not found |
Yukang/Llama-2-13b-chat-longlora-32k-sft | Yukang | 2023-10-13T03:36:25Z | 2,496 | 23 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-19T07:47:53Z | **We release the long instruction-following dataset**, [LongAlpaca-12k](https://drive.google.com/file/d/1JVC1p_Ht-1h61tKitOCW0blnCHf-552U/view?usp=share_link) and **the corresponding models**, [LongAlpaca-7B](https://huggingface.co/Yukang/LongAlpaca-7B), [LongAlpaca-13B](https://huggingface.co/Yukang/LongAlpaca-13B), and [LongAlpaca-70B](https://huggingface.co/Yukang/LongAlpaca-70B).
- (*These sft models*, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft), *have been deprecated*.) |
KBlueLeaf/DanTagGen-gamma | KBlueLeaf | 2024-04-15T16:52:41Z | 2,496 | 8 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"not-for-all-audiences",
"art",
"en",
"dataset:KBlueLeaf/danbooru2023-sqlite",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-15T16:48:14Z | ---
license: cc-by-nc-4.0
datasets:
- KBlueLeaf/danbooru2023-sqlite
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- not-for-all-audiences
- art
widget:
- text: "rating: safe\nartist: <|empty|>\ncharacters: <|empty|>\ncopyrights: <|empty|>\naspect ratio: 1.0\ntarget: <|short|>\ngeneral: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"
---
# DanTagGen - gamma
DanTagGen (Danbooru Tag Generator) is inspired by p1atdev's dart project, but uses a different architecture, dataset, format, and training strategy.
## Difference between versions
alpha: pretrained on a 2M-entry dataset with a smaller batch size; limited ability.<br>
beta: pretrained on a 5.3M-entry dataset with a larger batch size; more stable, and performs better when only a little information is provided.<br>
gamma: fine-tuned from beta on a 3.6M-entry dataset (the union of all posts after id 5,000,000 and the top 25% of posts by fav count).
## Model arch
This version of DTG is trained from scratch with a 400M-parameter LLaMA architecture (which I personally like to call NanoLLaMA).
Since it uses the LLaMA architecture, it should theoretically work with any LLaMA inference interface.
This repo also provides a converted FP16 GGUF model and quantized 8-bit/6-bit GGUF models.
It is recommended to use llama.cpp or llama-cpp-python to run this model, which will be very fast.
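For instance, a minimal llama-cpp-python sketch could look like the one below; the GGUF filename is a hypothetical placeholder (check this repo's file list for the actual names), and the sampling settings are illustrative:
```python
# Hedged sketch: run a DanTagGen GGUF quant with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="KBlueLeaf/DanTagGen-gamma",
    filename="DanTagGen-gamma.Q8_0.gguf",  # hypothetical name, replace with a real file from the repo
)
llm = Llama(model_path=model_path, n_ctx=384)

prompt = """rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>"""

out = llm(prompt, max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```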
## Format
```python3
prompt = f"""
rating: {rating or '<|empty|>'}
artist: {artist.strip() or '<|empty|>'}
characters: {characters.strip() or '<|empty|>'}
copyrights: {copyrights.strip() or '<|empty|>'}
aspect ratio: {f"{aspect_ratio:.1f}" if aspect_ratio else '<|empty|>'}
target: {'<|' + target + '|>' if target else '<|long|>'}
general: {", ".join(special_tags)}, {general.strip().strip(",")}<|input_end|>
"""
```
for example:
```
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>
```
And you may get something like:
```
rating: safe
artist: <|empty|>
characters: <|empty|>
copyrights: <|empty|>
aspect ratio: 1.0
target: <|short|>
general: 1girl, solo, dragon girl, dragon horns, dragon tail<|input_end|>open mouth, red eyes, long hair, pointy ears, tail, black hair, chinese clothes, simple background, dragon, hair between eyes, horns, china dress, dress, looking at viewer, breasts
```
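If you want to post-process a generation like the one above, the appended tags are simply everything after `<|input_end|>`; a tiny hedged sketch (the variable names are illustrative):
```python
# Split a full generated text (prompt + completion) into the newly generated tags.
completion = full_text.split("<|input_end|>", 1)[-1]
new_tags = [t.strip() for t in completion.split(",") if t.strip()]
print(new_tags)
```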
## Utilities
HF space: https://huggingface.co/spaces/KBlueLeaf/DTG-demo <br>
SD-WebUI extension (Forge compatible): https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg <br>
Third Party ComfyUI Node: https://github.com/toyxyz/a1111-sd-webui-dtg_comfyui |
RichardErkhov/concedo_-_KobbleTinyV2-1.1B-gguf | RichardErkhov | 2024-06-23T18:31:14Z | 2,496 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-23T18:19:38Z | Entry not found |