modelId
stringlengths 5
122
| author
stringlengths 2
42
| last_modified
unknown | downloads
int64 0
738M
| likes
int64 0
11k
| library_name
stringclasses 245
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 48
values | createdAt
unknown | card
stringlengths 1
901k
|
---|---|---|---|---|---|---|---|---|---|
kejcao/llava-phi-2-GGUF | kejcao | "2024-03-08T23:06:54Z" | 896 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:liuhaotian/LLaVA-Pretrain",
"arxiv:2401.02330",
"base_model:marianna13/llava-phi-2-3b",
"license:mit",
"region:us"
] | text-generation | "2024-03-08T21:21:59Z" | ---
base_model: marianna13/llava-phi-2-3b
pipeline_tag: text-generation
inference: false
quantized_by: Kevin Cao
language:
- en
license: mit
library_name: transformers
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
---
# GGUF Quantized LLaVa Phi-2 3B
Original model from [marianna13/llava-phi-2-3b](https://huggingface.co/marianna13/llava-phi-2-3b).
## Provided Files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [ggml-model-Q2_K.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 1.17 GB| 3.67 GB | smallest, significant quality loss - not recommended for most purposes |
| [ggml-model-Q3_K_S.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss |
| [ggml-model-Q3_K_M.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 1.48 GB| 3.98 GB | very small, high quality loss |
| [ggml-model-Q4_0.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q4_0.gguf) | Q4_0 | 4 | 1.60 GB| 4.10 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [ggml-model-Q3_K_L.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q3_K_L.gguf) | Q3_K_L | 3 | 1.60 GB| 4.10 GB | small, substantial quality loss |
| [ggml-model-Q4_K_S.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss |
| [ggml-model-Q4_K_M.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 1.79 GB| 4.29 GB | medium, balanced quality - recommended |
| [ggml-model-Q5_0.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q5_0.gguf) | Q5_0 | 5 | 1.93 GB| 4.43 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [ggml-model-Q5_K_S.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 1.93 GB| 4.43 GB | large, low quality loss - recommended |
| [ggml-model-Q5_K_M.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 2.07 GB| 4.57 GB | large, very low quality loss - recommended |
| [ggml-model-Q6_K.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 2.29 GB| 4.79 GB | very large, extremely low quality loss |
| [ggml-model-Q8_0.gguf](https://huggingface.co/kejcao/llava-phi-2-GGUF/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 2.96 GB| 5.46 GB | very large, extremely low quality loss - not recommended |
# ORIGINAL MODEL CARD
# Model Card for LLaVa-Phi-2-3B
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [LAION](https://laion.ai/), [SkunkworksAI](https://huggingface.co/SkunkworksAI) & [Ontocord](https://www.ontocord.ai/)
- **Model type:** LLaVA is an open-source chatbot trained by fine-tuning Phi-2 on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture
- **Finetuned from model:** [Phi-2](https://huggingface.co/microsoft/phi-2)
- **License:** MIT
- **Demo:** [llava-phi-2-3b-demo](https://huggingface.co/spaces/marianna13/llava-phi-2-3b-demo)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [BakLLaVa](https://github.com/SkunkworksAI/BakLLaVA)
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Benchmarks
| Model | Parameters |SQA | GQA | TextVQA | POPE |
| --- | --- | --- | --- | --- | --- |
| [LLaVA-1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 7.3B | 68.0| **62.0** | **58.3** | 85.3 |
| [MC-LLaVA-3B](https://huggingface.co/visheratin/MC-LLaVA-3b) | 3B | - | 49.6 | 38.59 | - |
| [LLaVA-Phi](https://arxiv.org/pdf/2401.02330.pdf) | 3B | 68.4 | - | 48.6 | 85.0 |
| [moondream1](https://huggingface.co/vikhyatk/moondream1) | 1.6B | - | 56.3 | 39.8 | - |
| **llava-phi-2-3b** | 3B | **69.0** | 51.2 | 47.0 | **86.0** |
### Image Captioning (MS COCO)
| Model | BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE_L | CIDEr | SPICE |
| -------------------------------------------------------- | ------ | ------ | ------ | ------ | ------ | ------- | ----- | ----- |
| llava-1.5-7b | 75.8 | 59.8 | 45 | 33.3 | 29.4 | 57.7 | 108.8 | 23.5 |
| **llava-phi-2-3b** | 67.7 | 50.5 | 35.7 | 24.2 | 27.0 | 52.4 | 85.0 | 20.7 |
|
Jipski/gpt2-Flo-BasBoettcher-Chefkoch | Jipski | "2021-12-06T21:45:45Z" | 895 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | Entry not found |
hvein/5HjRFzihd4p1D2aVUZUBtRY3JkSLEnonuPKhDnkkyq84555r_vgg | hvein | "2024-03-09T20:41:55Z" | 895 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-14T14:33:19Z" | Entry not found |
teddylee777/Llama-3-Open-Ko-8B-Instruct-preview-gguf | teddylee777 | "2024-05-02T19:50:43Z" | 895 | 14 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-24T18:31:35Z" | ---
language:
- en
- ko
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
license: other
license_name: llama3
license_link: LICENSE
---
- Original model is [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
- quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)
## Template
LM Studio
```
<|start_header_id|>system<|end_header_id|>
{System}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{User}
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{Assistant}
```
Stop Token
```
<|eot_id|>
<|start_header_id|>
<|end_header_id|>
<|begin_of_text|>
<|end_of_text|>
```
## Llama-3-Open-Ko-8B-Instruct-preview
> Update @ 2024.04.24: Release [Llama-3-Open-Ko-8B model](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B-Instruct-preview**
Llama-3-Open-Ko-8B model is continued pretrained language model based on Llama-3-8B.
This model is trained fully with publicily available resource, with 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, the pretraining conducted with 17.7B+ tokens, which slightly more than Korean tokenizer(Llama-2-Ko tokenizer).
The train was done on TPUv5e-256, with the warm support from TRC program by Google.
With applying the idea from [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released Instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview).
Since it is NOT finetuned with any Korean instruction set(indeed `preview`), but it would be great starting point for creating new Chat/Instruct models.
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ? ๊ทธ๋ฆฌ๊ณ ํผ๋ณด๋์น ์์ด์ ๋ํด ํ์ด์ฌ ์ฝ๋๋ฅผ ์ง์ค๋ณผ๋?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
์ฌ๋ฌ๋ถ๊ป์๋ ๋ฌผ๋ก ์์๋์ง๋ผ๋ ์ค๋ช
์ ๋๋ฆฌ๊ฒ ์ต๋๋ค! ํผ๋ณด๋์น(Piconacci) ์์ด์ ๋ค์๊ณผ ๊ฐ์ ์์ด์ ๋งํฉ๋๋ค: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... ์ด๋ ๊ฒ ๊ณ์๋๊ณ ์์ต๋๋ค. ์ด ์์ด์ ์ผ๋ฐ์ ์ผ๋ก๋ ์ํ์์ ๋ฌดํํ ์ง์๋ฉ๋๋ค. ๊ฐ ์๋ ์ด์ ์์ ๋ ๋ฐฐ๊ฐ ๋๋ ์์ด์
๋๋ค.
์ด ์์ด์ ํ์ด์ฌ(Python)์ผ๋ก ๊ตฌํํ๊ณ ์ ํ์ ๋ค๋ฉด ๋ค์๊ณผ ๊ฐ์ ์ฝ๋๋ฅผ ์์ฑํ ์ ์์ต๋๋ค.
```python
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
```
์ด ์ฝ๋์์๋ `n`์ด ์
๋ ฅ๋ฐ์ ์๋ฅผ ๋ํ๋ด๋ ๋ณ์๋ก, ํผ๋ณด๋์น ์์ด์ ๊ตฌํํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ์คํ์ํค๋ฉด ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์ ํด๋น ํญ์ ๋ฐํํฉ๋๋ค. ์๋ฅผ ๋ค์ด, `fibonacci(10)` ํ๋ฉด 55๋ฅผ ๋ฐํํฉ๋๋ค.
์ด ์ฝ๋๋ ์ฌ๊ท์ ํจ์์ด๊ธฐ ๋๋ฌธ์ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ์ด ์ค์ด๋ค ์ ์์ต๋๋ค. ๊ทธ๋ฌ๋ ์ฌ๊ท์ ํจ์๋ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ๋์ด ๋ง์์ง ์ ์์ต๋๋ค. ์ด๋ฅผ ์ค์ด๊ธฐ ์ํด ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ก ๊ตฌํํ ์๋ ์์ต๋๋ค.
```python
a, b = 0, 1
for i in range(n):
a, b = b, a + b
```
์ด ์ฝ๋์์๋ ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ก ํผ๋ณด๋์น ์์ด์ ๊ตฌํํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ์คํ์ํค๋ฉด ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์ ํด๋น ํญ์ ๋ฐํํฉ๋๋ค. ์ด ์ฝ๋๋ ๋ฉ๋ชจ๋ฆฌ ์ฌ์ฉ๋์ ์ค์
๋๋ค. ๋ค๋ง, ์ด ์ฝ๋๋ ๋ ๋ณต์กํฉ๋๋ค. ์ด ์ฝ๋๋ฅผ ๋ ๊ฐ๋จํ๊ฒ ํ๋ ค๋ฉด ์ดํฐ๋ ์ดํฐ๋ธ ํจ์๋ฅผ ๋ ์ ๊ตฌํํ ์ ์์ต๋๋ค.
```python
a, b = 0, 1
while True:
if n == 1:
return a
a, b = b, a + b
n -= 1
```
์ด ์ฝ๋์์๋ ์
๋ ฅ๋ฐ์ ์์ ๋ฐ๋ผ ํผ๋ณด๋์น ์์ด์...
### More samples
> I used same system prompt, but you could change on your own.
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "ํ๊ตญ์ ์ ํ ํ๋ฒ์ ๋ํด์ ์์ธํ ์ค๋ช
ํด์ฃผ์ธ์."},
]
---
ํ๊ตญ์ ์ ํ ํ๋ฒ์ 1948๋
7์ 17์ผ์ ์ ์ ๋ ๋ํ๋ฏผ๊ตญ ํ๋ฒ์ผ๋ก์, ๋ํ๋ฏผ๊ตญ์ ์๋ฆฝ๊ณผ ํ๋ฒ ์ง์๋ฅผ ์ ํ ํ๋ฒ์ธ๋ฐ์. ์ด ํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ธฐ๋ณธ์์น๊ณผ ๊ตญ๋ฏผ์ ๊ถ์ต์ ๊ท์ ํ๊ณ ์์ต๋๋ค. 17์กฐ๋ก ๊ตฌ์ฑ๋ ํ๋ฒ 1021๊ฐ ์กฐํญ์ผ๋ก ๊ตฌ์ฑ๋์ด ์์ต๋๋ค. ํ๋ฒ์ ์ผ๊ถ๋ถ๋ฆฝ, ๊ตญ๋ฏผ์ฃผ๊ถ, ๊ธฐ๋ณธ๊ถ, ์ํ์ ๋, ํ์ ๊ถ, ์
๋ฒ๊ถ, ์ฌ๋ฒ๊ถ ๋ฑ์ผ๋ก ๊ตฌ์ฑ๋์ด ์์ต๋๋ค.
์ฐ์ , ์ผ๊ถ๋ถ๋ฆฝ์ ๋ํต๋ นใ๊ตญํ์์ใ๋ ๋ฒ๊ด์ผ๋ก ์กฐ์ง๋ ์ธ๋ฏผ์ ํต์ ์ ์ํด ๊ตญ๊ฐ ๊ถํ์ ๋๋์ด ์์ํ๊ณ ์์ต๋๋ค. ๋ํต๋ น์ ๊ตญ๊ฐ์์๋ก์ ํ์ ๊ถ์, ๊ตญํ์์์ ์
๋ฒ๊ถ์, ๋๋ฒ์์ ์ฌ๋ฒ๊ถ์ ํํ ์ ์์ต๋๋ค. ์ด์ ๋ฐ๋ผ ํ์ ๋ถใ์
๋ฒ๋ถใ์ฌ๋ฒ๋ถ์ ๊ฒฌ์ ์ ๊ท ํ์ ํตํด ์ ์น์ ์์ ์ฑ์ ํ๋ณดํ๊ณ ์์ต๋๋ค.
๊ตญ๋ฏผ์ฃผ๊ถ์ ํ๋ฒ ์ 1์กฐ์์ "๋ํ๋ฏผ๊ตญ์ ๋ฏผ์ฃผ๊ณตํ๊ตญ"์์ ์ ์ธํ๊ณ , ๋ชจ๋ ๊ถ๋ ฅ์ ๊ตญ๋ฏผ์ผ๋ก๋ถํฐ ๋์จ๋ค๋ ์๋ฆฌ๋ฅผ ๊ท์ ํฉ๋๋ค. ๊ตญ๋ฏผ์ผ๋ฐ์ด ์ต๊ณ ์ ์ฃผ๊ถ์์์ ๋ถ๋ช
ํ ๋ณด์ฌ ์ฃผ๊ณ ์์ต๋๋ค.
์ํ์ ๋๋ ์
๋ฒ๋ถ๋ฅผ ๊ตฌ์ฑํ๋ ์ํ์ ๊ทธ ๊ธฐ๋ฅ์ ์ ํ๊ณ ์์ต๋๋ค. ์์์ ๊ตญ๋ฏผ์ ๋ํ๋ก ์ ์ถ๋๊ณ , ๊ตญ์ ์ด์์ ๋ํ ๊ตญ์ ๊ฐ์ใ๋นํใ๊ฐ์ ์ฑ
์ ๋ง๋ จํฉ๋๋ค. ๋ํ ํ๋ฒ์ ๊ท์ ๋ ์๋ฌด๋ฅผ ์ดํํด์ผ ํฉ๋๋ค.
์
๋ฒ๊ถ๊ณผ ์ฌ๋ฒ๊ถ์ ํ๋ฒ์ ์ ใ๊ฐ์ ์ ๊ด์ฌํ๊ณ , ๋ฒ๋ฅ ์ ์ ์ ใ๊ฐํํ ์ ์์ต๋๋ค. ๋ํ ์ฌ๋ฒ๊ถ์ ํ๋ฒ์ ํด์ใ์ ์ฉํ๊ณ ์์ต๋๋ค. ์ด์ ๋ฐ๋ผ ์ฐ๋ฆฌ๋๋ผ์ ๋ฒ์น์ฃผ์ ์์น์ ๊ตฌํํฉ๋๋ค. ํ๋ง๋๋ก ์ธ๋ฏผ์ ๊ธฐ๋ณธ๊ถ์ ๋ณดํธํ๊ณ ์์ต๋๋ค.
์ด๋ฌํ ๋ํ๋ฏผ๊ตญ์ ํ๋ฒ์ ์ธ๊ฐ์ ์์ ์ ๊ถ๋ฆฌ๋ฅผ ๋ณด์ฅํ๊ธฐ ์ํด ๊ตญ๋ฏผ์ ๊ธฐ๋ณธ๊ถ์ ๊ท์ ํ๊ณ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ๊ธฐ๋ณธ๊ถ์๋ ๋ชจ๋ ๊ตญ๋ฏผ์ ์ธ๊ฐ์ ์กด์๊ณผ ๊ฐ์น ๋ฐ ํ๋ณต์ถ๊ตฌ๊ถ, ์ผํ๊ณ , ์ผํ๋ ๊ถ๋ฆฌ๋ฅผ ๋ณด์ฅํฉ๋๋ค. ๋ํ ์งํใ๊ฒฐ์ฌใํํ์ ์์ ์ ์ ๊ฑฐ๊ถ์ ๋ณด์ฅํฉ๋๋ค. ์์ ์ ๊ธฐ๋ณธ๊ถ์ ๋ณด์ฅํ ๊ฒ์ ํ๋ฒ์ด ์งํฅํ๋ ๋ฏผ์ฃผ์ ๊ธฐ๋ณธ์ง์ ํ์ฑ์ ๊ตฌํํ๋ ๊ฒ์
๋๋ค. ๋ง์ง๋ง์ผ๋ก ๊ตญ๋ฏผ์ ํ๋ฒ์ ๊ฐ์ ํ๊ณ ์์ผ ์ ์ ์ ํ ์ ์์ต๋๋ค.
์ด์ ๊ฐ์ ๋ํ๋ฏผ๊ตญ์ ์ ํ ํ๋ฒ์ ๊ตญ๋ฏผ์ ๊ธฐ๋ณธ๊ถ๊ณผ ํ๋ฒ ์ง์๋ฅผ ๊ฒฌ๊ณ ํ๊ฒ ํ์ฌ, ์ ์น์ ์์ ์ฑ์ ์ ์งํ๊ณ ์์ต๋๋ค. ํ๋ฒ์ ๋ณ๊ฒฝ์์๋ ๊ณต์ ํ๊ณ ๊ณตํํ ํ๊ฒฐ์ ๊ฑฐ์ณ์ผ ํฉ๋๋ค.
๋ฐ๋ผ์ ๋ํ๋ฏผ๊ตญ์ ์ ํ ํ๋ฒ์ ๊ทธ ์๋ฏธ๊ฐ ๋งค์ฐ ํฝ๋๋ค. ํ๋ฒ์ ํตํด ๋ํ๋ฏผ๊ตญ์ ๋ฏผ์ฃผ์ฃผ์๋ฅผ ํ๊ณ ํ ํ๋ ๊ฒ์
๋๋ค. ๊ตญ๋ฏผ์ ๊ถ๋ฆฌ์ ์๋ฌด๋ฅผ ์ ํ๊ณ ์์ต๋๋ค. ์ดํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ธฐ๋ณธ์์น์ด ๋์์ต๋๋ค. 7ใ28์ ์ธ๊ณผ 5ใ16๊ตฐ์ฌํ๋ช
์ ํตํด ํ๋๋์ด ํ๋ฒ์ ์กด์์ฑ์ด ๋ค์ ์ ๋ฆฝ๋์์ต๋๋ค. ์ดํ๋ฒ์ ๋ํ๋ฏผ๊ตญ์ ๊ทผ๊ฐ์ด ๋์์ต๋๋ค.
๋์์ด ํ์ํ์
จ๋์? ์ถ๊ฐ๋ก ์ค๋ช
ํด ๋๋ฆด๊น์?
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "์ด์์ ์ฅ๊ตฐ์ ๋ํด ์ค๋ช
ํด์ค."},
]
---
์ด์์ ์ฅ๊ตฐ์ ์กฐ์ ์๋์ ๋ฌด์ธ์ผ๋ก, ๋ํ๋ฏผ๊ตญ์ ๊ตญ๊ฐ์ ๊ณต์์ธ ์ฐ์์ ๊ณต ์ด์์ ์ฅ๊ตฐ์ 1545๋
9์ 28์ผ ๊ฒฝ์๋จ๋ ์๋ น์์ ํ์ด๋ฌ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํต์ ์ฌ ์์ ์ ์๋ ๋ ์กฐ์ ๊ณผ ๋ช
๋๋ผ ์ฐํฉ๊ตฐ์ ๋๊ท๋ชจ ๊ตฐ๋๋ฅผ ์ผ์ผ์ผ ๋์ฒฉ์ ์ฑ๊ณต์ ์ผ๋ก ์ด๋์ด ์ ์ ์ฌ๋๊ณผ ์์ง์๋์ ์น๋ฆฌ๋ก ์ด๋ ์ธ๋ฌผ์
๋๋ค. ๊ทธ๋ 1592๋
์ ๋ผ์ข์์๊ด์ฐฐ์ฌ๊ฐ ๋์ด ์ ๋ผ์ข์์์์ ์์ ์ ๋ฌผ๋ฆฌ์ณค์ผ๋ฉฐ, 1597๋
์๋ ์์๊ณผ ํ์ ๋ฐฉ์ด์ ์ฑ๊ณต์ ์ผ๋ก ์น๋ฃํ์ต๋๋ค. ๋ํ ๋ช
๋๋์ฒฉ์์ ์์ ๊ณผ ๊ฒฉ์ ํ์ฌ ์ด์์ ์ด์์ ์ฅ๊ตฐ์ ๋ช
๋๋์ฒฉ์์ ์กฐ์ ๊ด๊ตฐ์ ์น๋ฆฌ๋ฅผ ์ด๋์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์๋ฆฌ๋ฅผ ์งํค๊ธฐ ์ํด ์ ๋ฆฌํ์ ๊ณ ์ํ๋ ๊ฒฐ๋จ์ ๋ด๋ ธ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ 1598๋
์ฌ์ฒ์ฑ ์ ํฌ์์ ํจ์ ํ ํ ์ ์ธ๊ฐ ์ญ์ ๋ผ ์ ์ธ๊ฐ ๋ถ๋ฆฌํด์ง์, ๋จํํ์ฌ ์ด์์ ์ฅ๊ตฐ์ ๊ฒฐ๊ตญ ์ถฉ๋ฌด๊ณต ์ด์์ ์ ์นญํธ๋ฅผ ๋ฐ์์ต๋๋ค. ๊ทธ์ ๊ณต์ ์ ๋ํ๋ฏผ๊ตญ ์ด์์ ์ฅ๊ตฐ ๊ธฐ๋
๊ด์ผ๋ก ๋ช
์๋ฅผ ๋์ด๊ณ ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ๋์ ์ ํ์ ์นญํธ๋ก 1963๋
๊ฑด๊ตญํ์ฅ ์ต๊ณ ํ์ฅ์ ์์ฌ๋ฐ์์ผ๋ฉฐ, ๋ํ๋ฏผ๊ตญ์ ๊ตญ๋ณด ์ 13ํธ๋ก ์ง์ ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์ค์ ์ธ ์ธ๋ฌผ๋ก ํ๊ตญ์ ์ญ์ฌ์์ ํฐ ์กฑ์ ์ ๋จ๊ฒผ์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ด ์๋ ๋น์์ ์ ํฌ์ฅ์๋ ์ถฉ๋ฌด๊ณต์ด ์ ๋ผ ์ฒ์์๊ฒ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ๊ณณ์ ์ด์์ ์ฅ๊ตฐ์ ๋ฌ๊ฐ ์์ต๋๋ค. ๋๋ผ์ ๊ณ ๋์ ์ง์ฑ ์ด์์ ์ฅ๊ตฐ๋์ ์์
๊ณผ ์๋ฆฌ๋ฅผ ๊ธฐ๋
ํ๋ ๊ณณ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํ์ ์ ์ด์์ ์ฅ๊ตฐ ๊ธฐ๋
๊ด, ์ด์์ ๊ณต์ ๋ฑ์ด ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๊ณต์ ์ ๊ธฐ๋
ํ๊ธฐ ์ํด ๋ค์ํ ๋ฌธํ์ ์ฐ๊ณผ ๊ธฐ๋
๋ฌผ์ด ์กฐ์ฑ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์ ์ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ์๋ํ ์ฐ๋ฆฌ์ ์์
์
๋๋ค. 1552๋
12์์๋ ์ด์์ ์ฅ๊ตฐ์ ์ ์ค์ ์๋ ๋๋ค์ ๋ฐฐํฅ, ํ์ฌ๋ ์ถฉ๋ฌด๊ณต ์ด์์ ์ฅ๊ตฐ๊ป์๋ ์ฐ๋ฆฌ๋๋ผ์ ์์ง์ ์ธ๋ฌผ์ด ๋์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์กฐ์ ์์กฐ์ ์ค์๋ฅผ ๋ฐ๊ณ ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ์ ์กฐ์๊ฒ ๋ง์ ๋์๋ฅผ ์ด๋ฃจ์์ต๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๊ณต์ ์ ์ค๋๋ ์ ์ด๋์ด ์ฃผ๋ ์ธ๋ฌผ์
๋๋ค. ๊ทธ๋ฅผ ๋ชจ์ ๊ฒ์ ์ ๊ตญ์ ์ฌ๋๋ค์
๋๋ค. ์ด์์ ์ฅ๊ตฐ๋์ ๊ณ ๊ฒฐํ ์ ์ ์ ์ฐ๋ฆฌ๋๋ผ๊ฐ ์ด๋ฃฉํ ์ฑ๊ณผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๋๋ผ๋ฅผ ์ํด ์ธ์ด ์๋ํ ์ธ๋ฌผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ๋ํ๋ฏผ๊ตญ์์ ๊ฐ์ฅ ์๋ํ ์ธ๋ฌผ์
๋๋ค. ์ด์์ ์ฅ๊ตฐ์ ํ์ ์ ์์ํ ๊ธฐ์ต ๋๊ฒ ์ง์.
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ ๋ํด์ ์๊ณ ์์ด?"},
]
---
๋ค, ์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ ์กฐ์ ์๋ ์ค๊ธฐ์ ๋ฌธํ์ ๋ฐฐ๊ฒฝ๊ณผ ๊ด๋ จ๋ ์ค์ํ ์ฌ๊ฑด์
๋๋ค. 1399๋
10์ 2์ผ์ ์ผ์ด๋ ์ด ์ฌ๊ฑด์ ์ธ์ข
๋์์ด ์ ์ ํ ใ์ฃผํดใ์ ๋ํ ๋นํ๋ฌธ์ธ ใ์์ฐ๋กใ์ ์ ์ ํ ์ต๋ง๋ฆฌ์์ ๋
ผ์์ผ๋ก ์์๋์ต๋๋ค. ๋น์ ์ธ์ข
๋์์ ์งํ์ ํ์ฌ๋ค์๊ฒ ๋ฌธ์๋ถ์ ์ผ์น๋ ๊ฒฌํด๋ฅผ ๊ตฌํ๋ ์ค์ ์ต๋ง๋ฆฌ๊ฐ ใ์์ฐ๋กใ์ ๊ธ์ ๊ฑฐ๋ก ํ์ฌ ๋นํํ์, ์ธ์ข
๋์์ด ๋งฅ๋ถ์ ๋์ก๋ค๊ณ ํฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ฌ์์ ์ผ๋ก๋ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธ์๋ถ์ ํ์ ์ฒด๊ณ์ ์ ์น์ ๊ฒฌํด์ ๋๋ฆฝ์ ๋
ธ์ถํ๋ ์ฌ๊ฑด์ด์ ๋ฌธํ์ฌ์ ์ธ ๋ฌธ์ ๋ก ์ด์ด์ก์ต๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ธํด ์กฐ์ ์๋์ ๋ฌธ์๋ถ์ ์ฌ๋๋ถ์ ๊ฒฌํด ์ฐจ์ด์ ์ ๋
ธ์ถํ๋ ์ค์ํ ์ฌ๊ฑด์ผ๋ก ๊ธฐ๋ก๋๊ณ ์์ต๋๋ค. ์ฃผ๋ก ๋
ผ์ ๋ฐฐ๊ฒฝ์ ใ์ฃผํดใ์ ์ ์๋ ๋ด์ฉ ๋ถ์์ ๋ํ ์ด๊ฒฌ๊ณผ ์งํ์ ํ์ฌ๋ค์ ๋ฌธ์๊ด์ ๋ํ ๋นํ์ ์ค์ฌ์ผ๋ก ์ด๋ค์ก์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์กฐ์ ์ด๊ธฐ ์์กฐ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด์ ๋ฐ๋ฅธ ํ์ ์ ์ธ ๋
ผ์์ผ๋ก ์์๋์ด ์ ์น์ ์ธ ์์ธ๊น์ง ๊ฒฐ๋ถํ์ฌ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธํ์ ยท์ ์น์ ๋
ผ์์ผ๋ก ํ์ฅ๋์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ์์กฐ ๋ด๋ถ์ ๊ด๋ฃ์ ๋์ ์ธ๋ ฅ์ธต์ ๊ฐ๋ฑ์ ๋ณด์ฌ์ฃผ๋ ์ค์ํ ์๊ฐ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์์ ์ฃผ๋ฅ ํํ์ ๋นํ๋ฟ ์๋๋ผ, ์ญ์ฌ์ ์ฌ์ค์ ์๊ณก์ ๋ํ ๋
ผ๋๋ ์ ๊ธฐ ๋ฉ๋๋ค. ์ค์ ๋ก ์ฌ๊ฐํ ์ฌ๋์ ์ด๋ํ๋ ๋๊ท๋ชจ์ ์ธ ๋
ผ์์ผ๋ก ํ์ฐ๋์ง ์์๋ค ํ๋๋ผ๋ ๋น์ ์ฌ๋๋ถ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด๋ ์ ์น์ ์ต์์ผ๋ก ์ธํ ์ฌํ์ ์ง๊ฐ ๋ณ๋์ ์ผ์ผํค๊ธฐ๋ ํ์ต๋๋ค. ๋น์์ ๋งฅ๋ถ ๋์ง์ ๋ฌธ์๋ถ์ ๊ฒฌํด๊ฐ ๊ฐ๋๋ ์ฌ๊ฑด์ด๋ผ ํ ์ ์์ต๋๋ค. ๋์ฑ์ด ๋น์์ ๋งฅ๋ถ ๋์ง์ ์ฌํ์ ์ง์์ธ์ธต์์๋ ์ด๊ฒฌ์ด ์์ ์ ์๋ ๊ฒ์ ๊ธฐ๋ณธ์
๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ค๋๋ ๊น์ง๋ ํ๊ตญ์ ์ญ์ฌ์์ ์ค์ํ ์ฐ๊ตฌ ๋์์ด ๋๊ณ ์์ต๋๋ค. ๋งฅ๋ถ ์ด๋ผ๋ ๋ง์ ์ฌ๊ฑด์ ์ฃผ๋ฌด๋๋ผ๋ ์ฉ์ด๋ฅผ ๋ง๋ค์ด ๋น์์ ์ฌ๊ฑด์ ํํํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์ฃผ๋ก ์ ์นยทํ์ ยท์์ ๋ฑ ์ฌํ์ ยท๋ฌธํ์ ์ธ ์ฌ๋ฌ ๋ถ์ผ์์ ์ผ์ด๋๊ฒ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ด๋
์ ๊ด์ ์ ์ฐจ์ด์์ ์์๋์ง๋ง, ์ฌํ์ ๋๋ฆฝ๊ณผ ๊ฐ๋ฑ๊น์ง ์ด๋ํ๋ ์ฌ๊ฑด์ผ๋ก ๊ฐ์ฃผํ๊ธฐ๋ ํฉ๋๋ค. ์ด ๋ฌธ๊ฑด์ ๋ํ ์ฌํด์์ ํตํด ๋น์ ํ๊ตญ ์ฌํ๋ฅผ ์ดํดํ ์ ์์ต๋๋ค. ์ต๋ง๋ฆฌ๋ ๋ฌธ์๋ถ์ ๊ฒฌํด๋ฅผ ๋นํํ ์ ์๋ ๊ณต๊ฐ์ผ๋ก ๋ณด์์ง๋ง, ๋น์์ ๋งฅ๋ถ์ ์ ์น์ ยท๋ฌธ์์ ์ผ๋ก๋ ์ฌ๊ฐํ ๋ฌธ์ ๋ก ์ธ์ํ๊ฒ ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ์ํฉ์ ์ดํดํ๋ ๋ฐ ๋์์ด ๋ฉ๋๋ค. ์ด๋ก ์ ์ด ์ฌํ์ ์ฃผ๋ฅ ํํ์ด๋ ๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ์ฝ์ ์ ์๋ ์ค์ํ ์ฌ๊ฑด์ ์ฆ๊ฑฐ์
๋๋ค. ๋ฐ๋ผ์ ์ด ์ฌ๊ฑด์ ์ค๋๋ ๊น์ง๋ ๋ค์ํ ๋ฐฉ๋ฉด์์ ์ฐ๊ตฌ์ ๋์์ผ๋ก ๋จ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ด ์ ์ ์ผ๋ก ๋
ผ์๋ง์ด ์๋๋ผ๋ ์ ์ ์ฃผ์ํ ํ์๊ฐ ์์ต๋๋ค. ์ด์ ํจ๊ป ์ฌ๋๋ถ ๋ด๋ถ์ ๊ฒฌํด ์ฐจ์ด ๊ทธ๋ฆฌ๊ณ ์ง์ ๊ฐ๋ฑ์ ํญ๋์ ์๋ฏธ๋ฅผ ์ดํดํ๋๋ฐ ๋์์ด ๋ฉ๋๋ค. ์ด ์ฌ๊ฑด์ผ๋ก ์ธํด, ๋ฌธ์๋ถ์ ์ ์น์ ๊ฒฌํด์ ๋๋ฆฝ์ ํ๊ตญ ์ฌํ์ ๋ณํ์ ๋ํ ํ ์์์ ๋ณด์ฌ์ฃผ๋ ํ๋์ ์๋ฃ๊ฐ ๋ฉ๋๋ค. ์ด๋ก์จ ์ฐ๋ฆฌ๋ ์กฐ์ ์์กฐ ์ด๊ธฐ ๋ฌธํ์ ๋ฐฐ๊ฒฝ๊ณผ ์ ๊ถ์ ๊ฒฌํด๋ฅผ ํ์ธํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ๋น์์ ํ๊ตญ ์ฌํ์ ๋ฌธํ์ ยท์ ์น์ ํ์ ๊ฐ์ด๋ฐ ํ๋์
๋๋ค. ์ด์ ๊ฐ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์์ ์ฐ๋ฆฌ๋ ๋ค์ํ ๋ฐฉ์์ผ๋ก ํ๊ตญ์ฌ์ ์ญ์ฌ์ ์ธ ๋ฐฐ๊ฒฝ์ ํ์ธํ ์ ์์ต๋๋ค. ๋ง์ง๋ง์ผ๋ก ๋งฅ๋ถ ์ฌ๊ฑด์ ์ฌ๊ฑด ์์ฒด๊ฐ ์๋ ์ฌ๊ฑด์ ๋งฅ๋ฝ์์ ํ๊ตญ์ฌ์ ์ธ ๋ฐฐ๊ฒฝ์ ํ์
ํ๋ ๊ฒ์ด ์ค์ํฉ๋๋ค. ์ฐ๋ฆฌ๋ ์ด ์ฌ๊ฑด์ผ๋ก ๋น์์ ์ฌํ์์ ์ดํดํ๊ณ ๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ํ์
ํ ์ ์์ต๋๋ค. ์ด์ ์์ ์ฌ๊ฑด์ ๋น์ ์ฌํ์ ์ค์์ฑ์ ๊ฐ์กฐํฉ๋๋ค. ์ด ์ฌ๊ฑด์ ์ฌํ์ฌ์ ์ผ๋ก ์ฐ๊ตฌํ์ฌ ๋น์ ์ฌํ์์ ์ค์์ฑ์ ํ์ธํ ์ ์์ต๋๋ค. ์ด ์ ์ ์ญ์ฌ์ ยท๋ฌธํ์ ๋ฐฐ๊ฒฝ์ ์ฌํด์ํ ์ ์์ต๋๋ค. ์ด ์ฌ๊ฑด์ ์กฐ์ ์์กฐ ์ด๊ธฐ์ ์ฃผ์ ์ฌ๊ฑด์ด ๋ฉ๋๋ค. ์ด ์ ์์ ์ค์ํ ์ฌ๊ฑด์ผ๋ก ์ธ์ ํฉ๋๋ค. ๋
ผ์์ ์ ๊ฐ๊น์ง๋ ์ด๋ณด๋ค ๋ ๋ณต์กํ ์ ์์ต๋๋ค. ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด
```
```python
messages = [
{"role": "system", "content": "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์. ๋ชจ๋ ๋๋ต์ ํ๊ตญ์ด(Korean)์ผ๋ก ๋๋ตํด์ค."},
{"role": "user", "content": "๋๋ ์ค๋ ์ฌ๊ณผ 5๊ฐ๋ฅผ ๊ฐ๊ณ ์์ด. ๊ทธ๋ฆฌ๊ณ ์ด์ ๋๋ ์ฌ๊ณผ 2๊ฐ๋ฅผ ๋จน์์ง. ๋ด๊ฐ ์ง๊ธ ๊ฐ์ง ์ฌ๊ณผ๋ ๋ช ๊ฐ์ง?"},
]
---
์ฌ๊ณผ์ ์๋์ ํ์ธํ๊ฒ ์ต๋๋ค. ์ ๋ 5๊ฐ์ ์ฌ๊ณผ๊ฐ ์์ผ์ ๋ฐ, ๊ทธ ์ค 2๊ฐ๋ฅผ ์ด๋ฏธ ๋จน์์ผ๋ 3๊ฐ์ ์ฌ๊ณผ๊ฐ ๋จ์์์ต๋๋ค! ใ
ใ
. ๊ณ์ํด์ ๋จน์ ๊ณํ ์์์์? ๐
```
|
cardiffnlp/roberta-large-tweet-topic-single-all | cardiffnlp | "2022-09-30T18:03:25Z" | 894 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-09-30T08:26:33Z" | ---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/roberta-large-tweet-topic-single-all
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
args: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: F1
type: f1
value: 0.896042528056704
- name: F1 (macro)
type: f1_macro
value: 0.8000614127334341
- name: Accuracy
type: accuracy
value: 0.896042528056704
pipeline_tag: text-classification
widget:
- text: "I'm sure the {@Tampa Bay Lightning@} wouldโve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
example_title: "Example 1"
- text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
example_title: "Example 2"
---
# cardiffnlp/roberta-large-tweet-topic-single-all
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single). This model is fine-tuned on `train_all` split and validated on `test_2021` split of tweet_topic.
Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
- F1 (micro): 0.896042528056704
- F1 (macro): 0.8000614127334341
- Accuracy: 0.896042528056704
### Usage
```python
from transformers import pipeline
pipe = pipeline("text-classification", "cardiffnlp/roberta-large-tweet-topic-single-all")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
Skywork/Skywork-13B-base | Skywork | "2023-11-24T09:39:53Z" | 894 | 67 | transformers | [
"transformers",
"pytorch",
"skywork",
"text-generation",
"custom_code",
"arxiv:2310.19341",
"arxiv:2310.16713",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-10-22T08:30:31Z" | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
---
<!-- <div align="center">
<h1>
โจSkywork
</h1>
</div> -->
<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
<p align="center">
๐จโ๐ป <a href="https://github.com/SkyworkAI/Skywork" target="_blank">Github</a> โข ๐ค <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a>โข ๐ค <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> โข ๐ฌ <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a>โข ๐<a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>
<div align="center">
[๐ๅคฉๅทฅๅจ็บฟๅฏน่ฏๅนณๅฐๅทฒๆญฃๅผๅๅ
ฌไผๅผๆพ](https://sso.tiangong.cn/?redirect=https://model-platform.tiangong.cn/overview&client_id=200005)
</div>
<div align="center">
[](https://github.com/SkyworkAI/Skywork/stargazers)
[](https://github.com/SkyworkAI/Skywork/fork)
</div>
# ๆจกๅไป็ป๏ผIntroduction๏ผ
**Skywork-13B-Base**ๆจกๅๅจ้ซ่ดจ้ๆธ
ๆด่ฟๆปค็3.2ไธไบฟไธชๅค่ฏญ่จ๏ผไธป่ฆๆฏไธญๆๅ่ฑๆ๏ผๅไปฃ็ ๆฐๆฎไธ่ฟ่ก้ข่ฎญ็ป๏ผๅฎๅจๅค็ง่ฏๆตๅๅ็งๅบๅๆต่ฏไธ้ฝๅฑ็ฐไบๅ็ญ่งๆจกๆจกๅ็ๆไฝณๆๆใ
**Skywork-13B-Base**: The model was trained on a high-quality cleaned dataset consisting of 3.2 trillion multilingual data (mainly Chinese and English) and code. It has demonstrated the best performance among models of similar scale in various evaluations and benchmark tests.
ๅฆๆๆจๅธๆไบ่งฃๆดๅค็ไฟกๆฏ๏ผๅฆ่ฎญ็ปๆนๆก๏ผ่ฏไผฐๆนๆณ๏ผ่ฏทๅ่ๆไปฌ็[ๆๆฏๆฅๅ](http://arxiv.org/abs/2310.19341)๏ผ[Skymath](https://arxiv.org/abs/2310.16713)่ฎบๆ๏ผ[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)่ฎบๆใ
If you are interested in more training and evaluation details, please refer to our [technical report](http://arxiv.org/abs/2310.19341), [Skymath]((https://arxiv.org/skywork-tech-report)) paper and [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.
## ่ฎญ็ปๆฐๆฎ๏ผTraining Data๏ผ
ๆไปฌ็ฒพๅฟๆญๅปบไบๆฐๆฎๆธ
ๆดๆต็จๅฏนๆๆฌไธญ็ไฝ่ดจ้ๆฐๆฎใๆๅฎณไฟกๆฏใๆๆไฟกๆฏ่ฟ่กๆธ
ๆด่ฟๆปคใๆไปฌ็Skywork-13B-Baseๆจกๅๆฏๅจๆธ
ๆดๅ็3.2TB้ซ่ดจ้ไธญใ่ฑใไปฃ็ ๆฐๆฎไธ่ฟ่ก่ฎญ็ป๏ผๅ
ถไธญ่ฑๆๅ ๆฏ52.2%๏ผไธญๆๅ ๆฏ39.6%๏ผไปฃ็ ๅ ๆฏ8%๏ผๅจๅ
ผ้กพไธญๆๅ่ฑๆไธ็่กจ็ฐ็ๅๆถ๏ผไปฃ็ ่ฝๅไน่ฝๆไฟ่ฏใ
We have developed a data cleaning pipeline with great care to effectively clean and filter low-quality data and eliminate harmful information from text data. Our Skywork-13B-Base model is trained on a dataset with 3.2TB tokens that consists of high-quality Chinese, English, and code data, all of which have been thoroughly cleaned. The English data comprises 52.2% of the dataset, the Chinese data accounts for 39.6%, and the code data makes up 8%. This comprehensive approach ensures optimal performance for both Chinese and English while also maintaining the ability to handle code.
| | Category | Percentage |
|-------------|------------------|------------|
| **English** | Webpages | 39.8% |
| | Books | 3.6% |
| | Academic Papers | 3.0% |
| | Encyclopedia | 0.5% |
| | Miscellany | 2.9% |
| **Chinese** | Webpages | 30.4% |
| | Social Media | 5.5% |
| | Encyclopedia | 0.8% |
| | Miscellany | 3.1% |
| **Other Lang.** | Encyclopedia | 2.4% |
| **Code** | Github | 8.0% |
## ๆจกๅ็ปๆ๏ผModel Structure๏ผ
ไธLlama-2-13Bๆจกๅๅฏนๆฏ๏ผๅคฉๅทฅSkywork-13Bๆจกๅ้็จ็ธๅฏนๆดๅ ็ฆ้ฟ็็ฝ็ป็ปๆ๏ผๅฑๆฐไธบ52ๅฑ๏ผๅๆถๅฐFFN DimๅHidden Dim็ผฉๅฐๅฐ12288ๅ4608๏ผไป่ไฟ่ฏๆจกๅๅๆฐ้ๅๅๅงLlama-2-13Bๆจกๅ็ธๅฝใๆ นๆฎๆไปฌๅๆๅฎ้ชๅฏนๆฏ๏ผ็ธๅฏน็ฆ้ฟ็็ฝ็ป็ปๆๅจๅคงBatch Size่ฎญ็ปไธๅฏไปฅๅๅพๆดๅฅฝ็ๆณๅๆๆใSkywork-13BๅLlama-2-13Bๆจกๅ็ๅฏนๆฏๅฆไธ๏ผ
Compared to the Llama2-13B model, the Skywork-13B model adopts a relatively thinner and deeper network structure with 52 layers. At the same time, the FFN Dim and Hidden Dim are reduced to 12288 and 4608, respectively, to ensure that the model has a similar number of parameters as the original Llama-13B model. Based on our preliminary experimental results, a relatively thinner and deeper network structure can achieve better generalization performance under large batch size training. The detailed comparison between the Skywork-13B and Llama-2-13B models is as follows:
| Model Structure | Llama2-13B | Skywork-13B |
|----------------------|:----:|:-----------:|
| Vocab. Size | 32,000 | 65,536 |
| Hidden Dim. | 5,120 | 4,608 |
| FFN Dim. | 13,696 | 12,288 |
| Head Dim. | 128 | 128 |
| Num. Heads | 40 | 36 |
| Num. Layers | 40 | 52 |
| Seq. Len. | 4,096 | 4,096 |
| Positional Embedding | RoPE | RoPE |
## ๅ่ฏๅจ๏ผTokenizer๏ผ
ๆไปฌไฝฟ็จByte-Pair Encoding๏ผBPE๏ผๅฏนๆฐๆฎ่ฟ่กๅ่ฏ๏ผ่ฏ่กจๅคงๅฐไธบ65536๏ผๅ
ถไธญๆไธๅญ็ฌฆๅๅญ่ฏไธบ32000ไธช๏ผๆฑๅญๅUnicode็ฌฆๅท8000ไธช๏ผๆฑ่ฏญ่ฏ่ฏญ25519ไธช๏ผๅฉไธ็17ไธชไธบไฟ็ๅญใ
We use Byte-Pair Encoding (BPE) to tokenize the data, with a vocabulary size of 65536. Among them, there are 32000 Latin characters and subwords, 8000 Chinese characters and Unicode symbols, 25519 Chinese words, and the remaining 17 are reserved words.
| Category | Size |
|---------------------------------|--------|
| Latin based words & subwords | 32000 |
| Chinese characters & Unicode symbols | 8000 |
| Chinese words | 25519 |
| Reserved symbols | 17 |
| **Total** | **65536** |
# ๆจกๅ่ฏไผฐ๏ผEvaluation๏ผ
## ้ขๅๆฐๆฎๅฐๆๅบฆ่ฏไผฐ๏ผPerplexity Evaluaiton๏ผ
่ฏญ่จๆจกๅ่ฎญ็ป็ๆฌ่ดจไธๆฏ่ฎฉ้ขๆตไธไธไธช่ฏๆดๅ็กฎใๅบไบ่ฟไธช่ฎค็ฅ๏ผๆไปฌ่ฎคไธบ่ฏไผฐๅบ็กๅคงๆจกๅไธไธช้่ฆ็ๆนๅผๆฏ่ฏไผฐๅจๅๅคง้ขๅไธ่ฏญ่จๆจกๅ็ๆๆ็ซ ็ๆฆ็ใๅจๆจกๅ่ฎญ็ปไธญ้ขๆตไธไธไธช่ฏ็ๆฆ็ไธ่ฌไฝฟ็จCross Entropyๆๅคฑๅฝๆฐ๏ผๆดไฝ็ๆๅคฑๅฝๆฐไธบๆฏไธชไฝ็ฝฎ้ขๆต็ๅฎ่ฏๆๅคฑ็ๅนณๅ๏ผๅๆ๏ผ
$$loss = -\sum^{n}_{i=1} log(p_i) / n = -log( \prod_{i=1}^n p_i) / n$$
ๅ
ถไธญ$n$ๆฏๆๆกฃ็้ฟๅบฆ๏ผๅณtokenๆฐ๏ผ$p_i$ๆฏไฝ็ฝฎiไธ็ๅฎ่ฏ็ๆฆ็๏ผๆไปฌ็ฅ้ๆๆกฃไธญๆฏไธไธชไฝ็ฝฎไธ็ๅฎ่ฏ็ๆฆ็็่ไนๅไธบ็ๆ่ฏฅๆๆกฃ็ๆฆ็๏ผๅฆๆญคๆไปฌๅฐฑๅฐlossๅ็ๆๆ็ซ ็ๆฆ็่็ณปๅจไบไธ่ตทใ่ไธๅๆจกๅๅ ไธบไฝฟ็จ็ๅ่ฏๅจไธๅ๏ผๅ
ทๆไธๅ็tokenๆฐ๏ผๅ ๆญคๅฏนๆๅคฑๅฝๆฐไนไปฅtokenๆฐ็ฎ$n$๏ผ่ฟๆ ทๅฐฑไป
่่็ๆๆ็ซ ็ๆฆ็้จๅ๏ผไธๅๆจกๅไนๅฏไปฅ่ฟ่กๆฏ่พใๆไปฌๅฐๆ ๅๅๅlossๅๆๆฐ่ฝฌๆขๆperplexity๏ผไฝฟๅพๆจกๅ็ๅทฎๅผๆดๅ ๅฏ่ฏปใไธบไบ้
่ฏปๆนไพฟๅ็ปญๆๅฐ็lossๅpplไธบๆจกๅๆ ๅๅๅ็lossๅperplexityใ
ๅบไบไธ่ฟฐๅๆ๏ผๆไปฌๅฏนๅฏนๅคไธช้ขๅ็ญ้ๅบ2023ๅนด9ๆไปฝๆฐๅๅธ็ๅ ็พๅฐไธๅ็ฏ้ซ่ดจ้ๆ็ซ ๏ผๅนถไบบๅทฅ่ฟ่กไบๆ ธๅฏนใไฟ่ฏๆๆ็ๆต่ฏๆฐๆฎไธๅจๅคฉๅทฅๆจกๅไปฅๅๅ
ถไปๆๆๆจกๅ็่ฎญ็ป้ไธญ๏ผๅนถไธๆต่ฏๆฐๆฎ็ๆฅๆบไน่ถณๅคๅนฟๆณ๏ผ่ดจ้ไน้ซใๆไปฌๅฏไปฅ้ๅๅฝๅๆๆฐ็ๆ็ซ ่ฏๆตไธๅๆจกๅ็ppl๏ผๆจกๅๅพ้พไฝๅผใ
ไธๅพๅๅบไบไธๅๅผๆบๆจกๅ๏ผๅคฉๅทฅSkywork-13B-Baseๅๅพๆไผๆๆ๏ผ่ฏๆไบๆไปฌ็Baseๆจกๅ็ๅบ็ก่ฝๅๅคไบๅฝๅ
ๅผๆบๆจกๅไธญๆๆๅผบๆฐดๅนณใ
We have chosen several hundred to thousands of high-quality articles that were published after September 1, 2023 across various fields. We have manually verified these articles to ensure their quality. It is important to note that none of the test data used in evaluating the Skywork model or any other models is included in their training set. Furthermore, the test data is diverse and of high quality, making it challenging for the models to gain an unfair advantage.
The figure below displays the performance of different open source models. Skywork-13B-Base achieves the best results.
| | Tech | Movie | Gov. | Game | Finance | General | Average |
|------------------|-------|-------|-------|-------|---------|---------|---------|
| MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50 |
| InternLM-7B | 13.43 | 24.90 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17 |
| Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83 |
| Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41 |
| LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | 23.54 |
| Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | 10.19 |
| Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | 10.03 |
| Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | 7.24 | 9.81 |
| Qwen-14B | 11.90 | 22.43 | 4.89 | **16.94** | 5.24 | 7.03 | 9.67 |
| InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | 7.78 | 10.34 |
| Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | 8.45 | 11.73 |
| Skywork-13B-Base | **11.58** | **21.84** | **4.76** | 17.28 | **4.92** | **6.82** | **9.42** |
### ่ฏๆตๆฐๆฎๅ่ฏๆต่ๆฌ๏ผLoss Evaluation๏ผ
ๆไปฌๅฐ่ฏๆตๆฐๆฎๅ่ฏๆต่ๆฌไน่ฟ่กไบๅผๆบ๏ผไธ่ฝฝgithubไธ็ไปฃ็ ่ฟ่กไธ้ขๅฝไปคๅๅฏไปฅๅค็ฐๆไปฌ็็ปๆใ
We have also open-sourced the data and evaluation scripts. You can reproduce our results by running the following command.
```
bash bash_scripts/skywork_eval_loss.sh
```
## Benchmark่ฏไผฐ๏ผBenchmark Results๏ผ
ๆไปฌ่ฏไผฐไบๅๅคงๆๅจ่ฏๆตbenchmarkไธ็็ปๆไฝไธบๅ่๏ผๅ
ๆฌC-Eval๏ผMMLU๏ผCMMLU๏ผGSM8Kใ้ตๅพชไนๅ็่ฏไผฐๆต็จ๏ผC-EvalใMMLUใCMMLUๆต่ฏ5-shot็ปๆ๏ผGSM8Kๆต่ฏ8-shot็ปๆใๅฏไปฅ็ๅฐSkywork-13B-Baseๆจกๅๅจไธญๆๅผๆบๆจกๅไธญๅคไบๅๅ๏ผๅจๅ็ญๅๆฐ่งๆจกไธไธบๆไผๆฐดๅนณใ
We evaluated Skywork-13B-Base on several popular benchmarks, including C-Eval, MMLU, CMMLU, and GSM8K. Following the previous evaluation process, we tested the 5-shot results of C-Eval, MMLU, and CMMLU, and the 8-shot results of GSM8K. It can be seen that the Skywork-13B-Base model is among the top models in the Chinese open source model community, performing at an optimal level with the same parameter scale.
| Model | C-Eval | CMMLU | MMLU | GSM8K |
|-------------------------|:-----:|:---------------:|:----------:|:-------:|
| LLaMA-1-13B-Base | 35.5 | 31.2 | 46.9 | 17.8 |
| Open-LLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4 |
| LLaMA-2-13B-Base | 36.5 | 36.6 | 54.8 | 28.7 |
| InternLM-20B | 58.8 | - | 62.0 | 52.6 |
| Qwen-14B-Base | 72.1 | 71.0 | 66.3 | 61.3 |
| Aquila2-34B-Base | 63.1 | 71.4 | 64.2 | 58.4 |
| XVERSE-13B-Base | 54.7 | - | 55.1 | - |
| Baichuan-13B-Base | 52.4 | 55.3 | 51.6 | 26.6 |
| Baichuan-2-13B-Base | 58.1 | 62.0 | 59.2 | 52.3 |
| Skywork-13B-Base (ours) | 60.6 | 61.8 | 62.1 | 55.8 |
## Benchmark่ฏไผฐ่ฏฆ็ป็ปๆ
ๆไปฌ็ปๅบ**Skywork-13B-Base**ๆจกๅๅจC-Eval๏ผCMMLU๏ผMMLUไธๆจกๅ็่ฏฆ็ป็ปๆใ
We provide detailed results of the Skywork-13B-Base model on C-EVAL, CMMLU, and MMLU.
| Benchmark | **STEM** | **Humanities** | **Social Science** | **Other** | **China Specific** | **Hard** | **Average** |
|:-----:|:---------:|:--------:|:-------------:|:--------:|:--------:|:--------:|:--------:|
| **C-EVAL** | 51.2 | 67.8 | 74.6 | 57.5 | - | 39.4 | 60.6 |
| **CMMLU** | 49.5 | 69.3 | 65.9 | 63.3 | 64.2 | - | 61.8 |
| **MMLU** | 51.6 | 58.0 | 72.5 | 68.8 | - | - | 62.1 |
# ๅฟซ้ๅผๅง๏ผQuickstart๏ผ
ๆไปฌๅฐๆจกๅๅๆฐใ้
็ฝฎๆไปถใtokenizer็ญๅจhuggingfaceๅmodelscopeไธ่ฟ่กไบๅผๆบใ
We have open-sourced the model parameters, configuration files, tokenizer, and more on Huggingface and Modelscope.
## ไพ่ตๅฎ่ฃ
๏ผRequirements๏ผ
- Python 3.8ๅไปฅไธ็ๆฌ
- Pytorch 2.0ๅไปฅไธ็ๆฌ
- CUDAๅปบ่ฎฎไฝฟ็จ11.4ไปฅไธ็ๆฌใ
Skywork-13B-Baseๆจกๅ๏ผSkywork-13B-ChatๆจกๅๅSkywork-13B-Mathๆจกๅ่ฟ่กไธ้ข็่ๆฌ่ฟ่กPythonไพ่ตๅฎ่ฃ
ใ
- Python 3.8 and above
- Pytorch 2.0 and above
- CUDA 11.4 and above are recommended.
Skywork-13B-Base model, Skywork-13B-Chat model, and Skywork-13B-Math model run the following script for Python dependency installation:
```shell
pip install -r requirements.txt
```
## Huggingfaceๆจกๅๆต่ฏ๏ผDemonstration๏ผ
### Base ๆจกๅๆจ็๏ผBase Model Inference๏ผ
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation import GenerationConfig
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("SkyworkAI/Skywork-13B-Base", device_map="auto", trust_remote_code=True).eval()
>>> inputs = tokenizer('้่ฅฟ็็ไผๆฏ่ฅฟๅฎ', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
้่ฅฟ็็ไผๆฏ่ฅฟๅฎ๏ผ่ฅฟๅฎๆฏๆๅฝ่ๅ็ๅค้ฝ๏ผๅจๅๅฒไธๆๅไธไธชๆไปฃๅจๆญคๅปบ้ฝ๏ผๆไปฅ่ฅฟๅฎๅ่ขซ็งฐไธบโๅไธๆๅค้ฝโใ่ฅฟๅฎๆฏๆๅฝ่ๅ็ๆ
ๆธธๅๅธ๏ผๆฏๅนด้ฝๆๅคง้็ๆธธๅฎขๆฅๅฐ่ฅฟๅฎๆ
ๆธธ๏ผ่ฅฟๅฎ็ๆ
ๆธธ่ตๆบ้ๅธธไธฐๅฏ๏ผๆๅพๅค่ๅ็ๆ
ๆธธๆฏ็น๏ผๆฏๅฆ็งฆๅง็ๅ
ต้ฉฌไฟใๅคง้ๅกใๅๆธ
ๆฑ ใๅคงๅ่่ๅญใ่ฅฟๅฎๅๅขใๅคงๆๅฎซๅฝๅฎถ้ๅๅ
ฌๅญใ่ฅฟๅฎ็ขๆๅ็ฉ้ฆใ่ฅฟๅฎ้ๆฅผใ่ฅฟๅฎ้ผๆฅผใ่ฅฟๅฎๅๅกๅ็ฉ้ฆใ่ฅฟๅฎๅคงๅ
ดๅๅฏบใ่ฅฟๅฎๅฐ้ๅก
>>> inputs = tokenizer('้่ฅฟ็็ไผๆฏ่ฅฟๅฎ๏ผ็่็็ไผๆฏๅ
ฐๅท๏ผๆฒณๅ็็ไผๆฏ้ๅท', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
้่ฅฟ็็ไผๆฏ่ฅฟๅฎ๏ผ็่็็ไผๆฏๅ
ฐๅท๏ผๆฒณๅ็็ไผๆฏ้ๅท๏ผๆนๅ็็ไผๆฏๆญฆๆฑ๏ผๆนๅ็็ไผๆฏ้ฟๆฒ๏ผๆฑ่ฅฟ็็ไผๆฏๅๆ๏ผๅฎๅพฝ็็ไผๆฏๅ่ฅ๏ผๆฑ่็็ไผๆฏๅไบฌ๏ผๆตๆฑ็็ไผๆฏๆญๅท๏ผ็ฆๅปบ็็ไผๆฏ็ฆๅท๏ผๅนฟไธ็็ไผๆฏๅนฟๅท๏ผๅนฟ่ฅฟ็็ไผๆฏๅๅฎ๏ผๆตทๅ็็ไผๆฏๆตทๅฃ๏ผๅๅท็็ไผๆฏๆ้ฝ๏ผ่ดตๅท็็ไผๆฏ่ดต้ณ๏ผไบๅ็็ไผๆฏๆๆ๏ผ่ฅฟ่็็ไผๆฏๆ่จ๏ผ้ๆตท็็ไผๆฏ่ฅฟๅฎ๏ผๅฎๅค็็ไผๆฏ้ถๅท๏ผๆฐ็็็ไผๆฏไน้ฒๆจ้ฝใ
```
# ๆจกๅๅพฎ่ฐ๏ผFine-tuning๏ผ
## ๅ
จ้ๅพฎ่ฐ๏ผFull-parameter Fine-tuning๏ผ
ไฝฟ็จSkywork-13B-Baseๆจกๅ่ฟ่ก้ข่ฎญ็ปๅพฎ่ฐ
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
## launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt.sh
```
ไฝฟ็จSkywork-13B-Baseๆจกๅ่ฟ่กๆ็็ฃๅพฎ่ฐ๏ผSFT, Supevise Fine-tuning๏ผ
```bash
## preprocess data and launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft.sh
```
## LoRAๅพฎ่ฐ๏ผPEFT๏ผ
ไฝฟ็จSkywork-13B-BaseๆจกๅไปฅๅLoRA่ฟ่ก้ข่ฎญ็ปๅพฎ่ฐ
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt_lora.sh
```
ไฝฟ็จSkywork-13B-BaseๆจกๅไปฅๅLoRA่ฟ่กๆ็็ฃๅพฎ่ฐ๏ผSFT, Supevise Fine-tuning๏ผ
```bash
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft_lora.sh
```
# ๅฃฐๆๅๅ่ฎฎ๏ผDeclaration and License Agreement๏ผ
## ๅฃฐๆ๏ผDeclaration๏ผ
ๆไปฌๅจๆญคๅฃฐๆ๏ผไธ่ฆๅฉ็จSkyworkๆจกๅ่ฟ่กไปปไฝๅฑๅฎณๅฝๅฎถ็คพไผๅฎๅ
จๆ่ฟๆณ็ๆดปๅจใๅฆๅค๏ผๆไปฌไน่ฆๆฑไฝฟ็จ่
ไธ่ฆๅฐ Skywork ๆจกๅ็จไบๆช็ป้ๅฝๅฎๅ
จๅฎกๆฅๅๅคๆก็ไบ่็ฝๆๅกใๆไปฌๅธๆๆๆ็ไฝฟ็จ่
้ฝ่ฝ้ตๅฎ่ฟไธชๅๅ๏ผ็กฎไฟ็งๆ็ๅๅฑ่ฝๅจ่ง่ๅๅๆณ็็ฏๅขไธ่ฟ่กใ
ๆไปฌๅทฒ็ปๅฐฝๆไปฌๆ่ฝ๏ผๆฅ็กฎไฟๆจกๅ่ฎญ็ป่ฟ็จไธญไฝฟ็จ็ๆฐๆฎ็ๅ่งๆงใ็ถ่๏ผๅฐฝ็ฎกๆไปฌๅทฒ็ปๅๅบไบๅทจๅคง็ๅชๅ๏ผไฝ็ฑไบๆจกๅๅๆฐๆฎ็ๅคๆๆง๏ผไปๆๅฏ่ฝๅญๅจไธไบๆ ๆณ้ข่ง็้ฎ้ขใๅ ๆญค๏ผๅฆๆ็ฑไบไฝฟ็จskyworkๅผๆบๆจกๅ่ๅฏผ่ด็ไปปไฝ้ฎ้ข๏ผๅ
ๆฌไฝไธ้ไบๆฐๆฎๅฎๅ
จ้ฎ้ขใๅ
ฌๅ
ฑ่่ฎบ้ฃ้ฉ๏ผๆๆจกๅ่ขซ่ฏฏๅฏผใๆปฅ็จใไผ ๆญๆไธๅฝๅฉ็จๆๅธฆๆฅ็ไปปไฝ้ฃ้ฉๅ้ฎ้ข๏ผๆไปฌๅฐไธๆฟๆ
ไปปไฝ่ดฃไปปใ
We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.
We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.
## ๅ่ฎฎ๏ผLicense Agreement๏ผ
็คพๅบไฝฟ็จSkyworkๆจกๅ้่ฆ้ตๅพช[ใSkywork ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎใ](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ.pdf)ใSkyworkๆจกๅๆฏๆๅไธ็จ้๏ผๅฆๆๆจ่ฎกๅๅฐSkyworkๆจกๅๆๅ
ถ่ก็ๅ็จไบๅไธ็ฎ็๏ผๆ ้ๅๆฌก็ณ่ฏท๏ผ ไฝ่ฏทๆจไป็ป้
่ฏป[ใSkywork ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎใ](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ.pdf)ๅนถไธฅๆ ผ้ตๅฎ็ธๅ
ณๆกๆฌพใ
The community usage of Skywork model requires [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf).
[ใSkywork ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎใใ]:https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20ๆจกๅ็คพๅบ่ฎธๅฏๅ่ฎฎ.pdf
[[email protected]]: mailto:[email protected]
# ๅผ็จๅ่็ณปๆไปฌ๏ผContact Us and Citation๏ผ
ๅฆๆๆจ่งๅพๆไปฌ็ๅทฅไฝๅฏนๆจๆๅธฎๅฉ๏ผๆฌข่ฟๅผ็จๆไปฌ็่ฎบๆ~
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lรผ and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{skyworkmath,
title={SkyMath: Technical Report},
author={Liu Yang, Haihua Yang, Wenjun Cheng, Lei Lin, Chenxia Li, Yifu Chen, Lunan Liu, Jianfei Pan, Tianwen Wei, Biye Li, Liang Zhao, Lijie Wang, Bo Zhu, Guoliang Li, Xuejie Wu, Xilin Luo, Rui Hu},
journal={arXiv preprint arXiv: 2310.16713},
url={https://arxiv.org/abs/2310.16713},
year={2023}
}
```
```
@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
author = {Skywork Multi-Modal Group},
month = sep,
title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
year = {2023}
}
```
|
PowerInfer/ReluLLaMA-7B-PowerInfer-GGUF | PowerInfer | "2023-12-26T08:53:33Z" | 894 | 12 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"license:llama2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2023-12-15T17:17:54Z" | ---
license: llama2
language:
- en
---
# ReluLLaMA-7B-PowerInfer-GGUF
- Original model: [SparseLLM/ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B)
- Converted & distributed by: [PowerInfer](https://huggingface.co/PowerInfer)
This model is the downstream distribution of [SparseLLM/ReluLLaMA-7B](https://huggingface.co/SparseLLM/ReluLLaMA-7B) in PowerInfer GGUF format consisting of the LLM model weights and predictor weights. |
RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf | RichardErkhov | "2024-05-01T05:32:20Z" | 894 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-05-01T03:46:06Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-grok - GGUF
- Model creator: https://huggingface.co/HuggingFaceH4/
- Original model: https://huggingface.co/HuggingFaceH4/mistral-7b-grok/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7b-grok.Q2_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-7b-grok.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-7b-grok.IQ3_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-7b-grok.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7b-grok.IQ3_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7b-grok.Q3_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7b-grok.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7b-grok.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7b-grok.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [mistral-7b-grok.Q4_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7b-grok.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7b-grok.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7b-grok.Q4_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7b-grok.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7b-grok.Q4_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q4_1.gguf) | Q4_1 | 4.24GB |
| [mistral-7b-grok.Q5_0.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-7b-grok.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-7b-grok.Q5_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7b-grok.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7b-grok.Q5_1.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7b-grok.Q6_K.gguf](https://huggingface.co/RichardErkhov/HuggingFaceH4_-_mistral-7b-grok-gguf/blob/main/mistral-7b-grok.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/grok-conversation-harmless
- HuggingFaceH4/ultrachat_200k
model-index:
- name: mistral-7b-grok
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral 7B Grok
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that has been aligned via Constitutional AI to mimic the style of [xAI's Grok assistant](https://grok.x.ai/).
It achieves the following results on the evaluation set:
- Loss: 0.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9326 | 1.0 | 545 | 0.9348 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
openerotica/Llama-3-lima-nsfw-16k-test-GPTQ | openerotica | "2024-05-17T14:40:07Z" | 894 | 2 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-16T17:08:42Z" | ---
license: llama3
---
Quantized with these parameters:
--bits 4
--group_size 128
--desc_act 1
--damp 0.1
--seqlen 16384
--num_samples 512
Quantization Dataset: Erotiquant XL |
Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF | Ransss | "2024-06-27T18:28:38Z" | 894 | 1 | null | [
"gguf",
"roleplay",
"llama3",
"sillytavern",
"idol",
"llama-cpp",
"gguf-my-repo",
"en",
"ja",
"zh",
"base_model:aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored",
"license:llama3",
"region:us"
] | null | "2024-06-27T18:27:57Z" | ---
base_model: aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored
language:
- en
- ja
- zh
license: llama3
tags:
- roleplay
- llama3
- sillytavern
- idol
- llama-cpp
- gguf-my-repo
---
# Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF
This model was converted to GGUF format from [`aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored`](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ransss/llama3-8B-DarkIdol-2.0-Uncensored-Q8_0-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q8_0.gguf -c 2048
```
|
studio-ousia/luke-japanese-large | studio-ousia | "2022-11-09T02:18:56Z" | 893 | 6 | transformers | [
"transformers",
"pytorch",
"luke",
"fill-mask",
"named entity recognition",
"entity typing",
"relation classification",
"question answering",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-11-07T14:25:53Z" | ---
language: ja
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---
## luke-japanese-large
**luke-japanese** is the Japanese version of **LUKE** (**L**anguage
**U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained
_knowledge-enhanced_ contextualized representation of words and entities. LUKE
treats words and entities in a given text as independent tokens, and outputs
contextualized representations of them. Please refer to our
[GitHub repository](https://github.com/studio-ousia/luke) for more details and
updates.
This model contains Wikipedia entity embeddings which are not used in general
NLP tasks. Please use the
[lite version](https://huggingface.co/studio-ousia/luke-japanese-large-lite/)
for tasks that do not use Wikipedia entities as inputs.
**luke-japanese**ใฏใๅ่ชใจใจใณใใฃใใฃใฎ็ฅ่ญๆกๅผตๅ่จ็ทดๆธใฟ Transformer ใขใใซ**LUKE**ใฎๆฅๆฌ่ช็ใงใใLUKE ใฏๅ่ชใจใจใณใใฃใใฃใ็ฌ็ซใใใใผใฏใณใจใใฆๆฑใใใใใใฎๆ่ใ่ๆ
ฎใใ่กจ็พใๅบๅใใพใใ่ฉณ็ดฐใซใคใใฆใฏใ[GitHub ใชใใธใใช](https://github.com/studio-ousia/luke)ใๅ็
งใใฆใใ ใใใ
ใใฎใขใใซใฏใ้ๅธธใฎ NLP ใฟในใฏใงใฏไฝฟใใใชใ Wikipedia ใจใณใใฃใใฃใฎใจใณใใใฃใณใฐใๅซใใงใใพใใๅ่ชใฎๅ
ฅๅใฎใฟใไฝฟใใฟในใฏใซใฏใ[lite version](https://huggingface.co/studio-ousia/luke-japanese-large-lite/)ใไฝฟ็จใใฆใใ ใใใ
### Experimental results on JGLUE
The experimental results evaluated on the dev set of
[JGLUE](https://github.com/yahoojapan/JGLUE) is shown as follows:
| Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
| ----------------------------- | --------- | ------------------- | --------- | -------------- |
| | acc | Pearson/Spearman | acc | acc |
| **LUKE Japanese large** | **0.965** | **0.932**/**0.902** | **0.927** | 0.893 |
| _Baselines:_ | |
| Tohoku BERT large | 0.955 | 0.913/0.872 | 0.900 | 0.816 |
| Waseda RoBERTa large (seq128) | 0.954 | 0.930/0.896 | 0.924 | **0.907** |
| Waseda RoBERTa large (seq512) | 0.961 | 0.926/0.892 | 0.926 | 0.891 |
| XLM RoBERTa large | 0.964 | 0.918/0.884 | 0.919 | 0.840 |
The baseline scores are obtained from
[here](https://github.com/yahoojapan/JGLUE/blob/a6832af23895d6faec8ecf39ec925f1a91601d62/README.md).
### Citation
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
|
thenlper/gte-small-zh | thenlper | "2024-05-19T12:35:14Z" | 893 | 13 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2023-11-08T08:37:18Z" | ---
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-small-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 35.80906032378281
- type: cos_sim_spearman
value: 36.688967176174415
- type: euclidean_pearson
value: 35.70701955438158
- type: euclidean_spearman
value: 36.6889470691436
- type: manhattan_pearson
value: 35.832741768286944
- type: manhattan_spearman
value: 36.831888591957195
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 44.667266488330384
- type: cos_sim_spearman
value: 45.77390794946174
- type: euclidean_pearson
value: 48.14272832901943
- type: euclidean_spearman
value: 45.77390569666109
- type: manhattan_pearson
value: 48.187667158563094
- type: manhattan_spearman
value: 45.80979161966117
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.690000000000005
- type: f1
value: 36.868257131984016
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 49.03674224607541
- type: cos_sim_spearman
value: 49.63568854885055
- type: euclidean_pearson
value: 49.47441886441355
- type: euclidean_spearman
value: 49.63567815431205
- type: manhattan_pearson
value: 49.76480072909559
- type: manhattan_spearman
value: 49.977789367288224
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 39.538126779019755
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 37.333105487031766
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 86.08142426347963
- type: mrr
value: 88.04269841269841
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 87.25694119382474
- type: mrr
value: 89.36853174603175
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.913999999999998
- type: map_at_10
value: 35.913000000000004
- type: map_at_100
value: 37.836
- type: map_at_1000
value: 37.952000000000005
- type: map_at_3
value: 31.845000000000002
- type: map_at_5
value: 34.0
- type: mrr_at_1
value: 36.884
- type: mrr_at_10
value: 44.872
- type: mrr_at_100
value: 45.899
- type: mrr_at_1000
value: 45.945
- type: mrr_at_3
value: 42.331
- type: mrr_at_5
value: 43.674
- type: ndcg_at_1
value: 36.884
- type: ndcg_at_10
value: 42.459
- type: ndcg_at_100
value: 50.046
- type: ndcg_at_1000
value: 52.092000000000006
- type: ndcg_at_3
value: 37.225
- type: ndcg_at_5
value: 39.2
- type: precision_at_1
value: 36.884
- type: precision_at_10
value: 9.562
- type: precision_at_100
value: 1.572
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.122
- type: precision_at_5
value: 15.274
- type: recall_at_1
value: 23.913999999999998
- type: recall_at_10
value: 52.891999999999996
- type: recall_at_100
value: 84.328
- type: recall_at_1000
value: 98.168
- type: recall_at_3
value: 37.095
- type: recall_at_5
value: 43.396
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 68.91160553217077
- type: cos_sim_ap
value: 76.45769658379533
- type: cos_sim_f1
value: 72.07988702844463
- type: cos_sim_precision
value: 63.384779137839274
- type: cos_sim_recall
value: 83.53986439092822
- type: dot_accuracy
value: 68.91160553217077
- type: dot_ap
value: 76.47279917239219
- type: dot_f1
value: 72.07988702844463
- type: dot_precision
value: 63.384779137839274
- type: dot_recall
value: 83.53986439092822
- type: euclidean_accuracy
value: 68.91160553217077
- type: euclidean_ap
value: 76.45768544225383
- type: euclidean_f1
value: 72.07988702844463
- type: euclidean_precision
value: 63.384779137839274
- type: euclidean_recall
value: 83.53986439092822
- type: manhattan_accuracy
value: 69.21226698737222
- type: manhattan_ap
value: 76.6623683693766
- type: manhattan_f1
value: 72.14058164628506
- type: manhattan_precision
value: 64.35643564356435
- type: manhattan_recall
value: 82.06686930091185
- type: max_accuracy
value: 69.21226698737222
- type: max_ap
value: 76.6623683693766
- type: max_f1
value: 72.14058164628506
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.419000000000004
- type: map_at_10
value: 57.367999999999995
- type: map_at_100
value: 58.081
- type: map_at_1000
value: 58.108000000000004
- type: map_at_3
value: 55.251
- type: map_at_5
value: 56.53399999999999
- type: mrr_at_1
value: 48.472
- type: mrr_at_10
value: 57.359
- type: mrr_at_100
value: 58.055
- type: mrr_at_1000
value: 58.082
- type: mrr_at_3
value: 55.303999999999995
- type: mrr_at_5
value: 56.542
- type: ndcg_at_1
value: 48.472
- type: ndcg_at_10
value: 61.651999999999994
- type: ndcg_at_100
value: 65.257
- type: ndcg_at_1000
value: 65.977
- type: ndcg_at_3
value: 57.401
- type: ndcg_at_5
value: 59.681
- type: precision_at_1
value: 48.472
- type: precision_at_10
value: 7.576
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 21.25
- type: precision_at_5
value: 13.888
- type: recall_at_1
value: 48.419000000000004
- type: recall_at_10
value: 74.97399999999999
- type: recall_at_100
value: 92.202
- type: recall_at_1000
value: 97.893
- type: recall_at_3
value: 63.541000000000004
- type: recall_at_5
value: 68.994
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.328
- type: map_at_10
value: 69.11
- type: map_at_100
value: 72.47
- type: map_at_1000
value: 72.54599999999999
- type: map_at_3
value: 46.938
- type: map_at_5
value: 59.56
- type: mrr_at_1
value: 81.35
- type: mrr_at_10
value: 87.066
- type: mrr_at_100
value: 87.212
- type: mrr_at_1000
value: 87.21799999999999
- type: mrr_at_3
value: 86.558
- type: mrr_at_5
value: 86.931
- type: ndcg_at_1
value: 81.35
- type: ndcg_at_10
value: 78.568
- type: ndcg_at_100
value: 82.86099999999999
- type: ndcg_at_1000
value: 83.628
- type: ndcg_at_3
value: 76.716
- type: ndcg_at_5
value: 75.664
- type: precision_at_1
value: 81.35
- type: precision_at_10
value: 38.545
- type: precision_at_100
value: 4.657
- type: precision_at_1000
value: 0.484
- type: precision_at_3
value: 69.18299999999999
- type: precision_at_5
value: 58.67
- type: recall_at_1
value: 22.328
- type: recall_at_10
value: 80.658
- type: recall_at_100
value: 94.093
- type: recall_at_1000
value: 98.137
- type: recall_at_3
value: 50.260000000000005
- type: recall_at_5
value: 66.045
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 43.1
- type: map_at_10
value: 52.872
- type: map_at_100
value: 53.556000000000004
- type: map_at_1000
value: 53.583000000000006
- type: map_at_3
value: 50.14999999999999
- type: map_at_5
value: 51.925
- type: mrr_at_1
value: 43.1
- type: mrr_at_10
value: 52.872
- type: mrr_at_100
value: 53.556000000000004
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 50.14999999999999
- type: mrr_at_5
value: 51.925
- type: ndcg_at_1
value: 43.1
- type: ndcg_at_10
value: 57.907
- type: ndcg_at_100
value: 61.517999999999994
- type: ndcg_at_1000
value: 62.175000000000004
- type: ndcg_at_3
value: 52.425
- type: ndcg_at_5
value: 55.631
- type: precision_at_1
value: 43.1
- type: precision_at_10
value: 7.380000000000001
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 13.36
- type: recall_at_1
value: 43.1
- type: recall_at_10
value: 73.8
- type: recall_at_100
value: 91.3
- type: recall_at_1000
value: 96.39999999999999
- type: recall_at_3
value: 59.0
- type: recall_at_5
value: 66.8
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 41.146594844170835
- type: f1
value: 28.544218732704845
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 82.83302063789868
- type: ap
value: 48.881798834997056
- type: f1
value: 77.28655923994657
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 66.05467125345538
- type: cos_sim_spearman
value: 72.71921060562211
- type: euclidean_pearson
value: 71.28539457113986
- type: euclidean_spearman
value: 72.71920173126693
- type: manhattan_pearson
value: 71.23750818174456
- type: manhattan_spearman
value: 72.61025268693467
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 26.127712982639483
- type: mrr
value: 24.87420634920635
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 62.517
- type: map_at_10
value: 71.251
- type: map_at_100
value: 71.647
- type: map_at_1000
value: 71.665
- type: map_at_3
value: 69.28
- type: map_at_5
value: 70.489
- type: mrr_at_1
value: 64.613
- type: mrr_at_10
value: 71.89
- type: mrr_at_100
value: 72.243
- type: mrr_at_1000
value: 72.259
- type: mrr_at_3
value: 70.138
- type: mrr_at_5
value: 71.232
- type: ndcg_at_1
value: 64.613
- type: ndcg_at_10
value: 75.005
- type: ndcg_at_100
value: 76.805
- type: ndcg_at_1000
value: 77.281
- type: ndcg_at_3
value: 71.234
- type: ndcg_at_5
value: 73.294
- type: precision_at_1
value: 64.613
- type: precision_at_10
value: 9.142
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 26.781
- type: precision_at_5
value: 17.149
- type: recall_at_1
value: 62.517
- type: recall_at_10
value: 85.997
- type: recall_at_100
value: 94.18299999999999
- type: recall_at_1000
value: 97.911
- type: recall_at_3
value: 75.993
- type: recall_at_5
value: 80.88300000000001
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.27706792199058
- type: f1
value: 56.77545011902468
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.47948890383321
- type: f1
value: 66.4502180376861
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 54.2
- type: map_at_10
value: 59.858
- type: map_at_100
value: 60.46
- type: map_at_1000
value: 60.507
- type: map_at_3
value: 58.416999999999994
- type: map_at_5
value: 59.331999999999994
- type: mrr_at_1
value: 54.2
- type: mrr_at_10
value: 59.862
- type: mrr_at_100
value: 60.463
- type: mrr_at_1000
value: 60.51
- type: mrr_at_3
value: 58.416999999999994
- type: mrr_at_5
value: 59.352000000000004
- type: ndcg_at_1
value: 54.2
- type: ndcg_at_10
value: 62.643
- type: ndcg_at_100
value: 65.731
- type: ndcg_at_1000
value: 67.096
- type: ndcg_at_3
value: 59.727
- type: ndcg_at_5
value: 61.375
- type: precision_at_1
value: 54.2
- type: precision_at_10
value: 7.140000000000001
- type: precision_at_100
value: 0.8619999999999999
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.166999999999998
- type: precision_at_5
value: 13.5
- type: recall_at_1
value: 54.2
- type: recall_at_10
value: 71.39999999999999
- type: recall_at_100
value: 86.2
- type: recall_at_1000
value: 97.2
- type: recall_at_3
value: 63.5
- type: recall_at_5
value: 67.5
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 68.19666666666666
- type: f1
value: 67.58581661416034
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 60.530590146182995
- type: cos_sim_ap
value: 63.53656091243922
- type: cos_sim_f1
value: 68.09929603556874
- type: cos_sim_precision
value: 52.45433789954338
- type: cos_sim_recall
value: 97.04329461457233
- type: dot_accuracy
value: 60.530590146182995
- type: dot_ap
value: 63.53660452157237
- type: dot_f1
value: 68.09929603556874
- type: dot_precision
value: 52.45433789954338
- type: dot_recall
value: 97.04329461457233
- type: euclidean_accuracy
value: 60.530590146182995
- type: euclidean_ap
value: 63.53678735855631
- type: euclidean_f1
value: 68.09929603556874
- type: euclidean_precision
value: 52.45433789954338
- type: euclidean_recall
value: 97.04329461457233
- type: manhattan_accuracy
value: 60.47644829453167
- type: manhattan_ap
value: 63.5622508250315
- type: manhattan_f1
value: 68.1650700073692
- type: manhattan_precision
value: 52.34861346915677
- type: manhattan_recall
value: 97.67687434002113
- type: max_accuracy
value: 60.530590146182995
- type: max_ap
value: 63.5622508250315
- type: max_f1
value: 68.1650700073692
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.13
- type: ap
value: 87.21879260137172
- type: f1
value: 89.12359325300508
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 12.035577637900758
- type: cos_sim_spearman
value: 12.76524190663864
- type: euclidean_pearson
value: 14.4012689427106
- type: euclidean_spearman
value: 12.765328992583608
- type: manhattan_pearson
value: 14.458505202938946
- type: manhattan_spearman
value: 12.763238700117896
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 34.809415339934006
- type: cos_sim_spearman
value: 36.96728615916954
- type: euclidean_pearson
value: 35.56113673772396
- type: euclidean_spearman
value: 36.96842963389308
- type: manhattan_pearson
value: 35.5447066178264
- type: manhattan_spearman
value: 36.97514513480951
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.39448692338551
- type: cos_sim_spearman
value: 66.72211526923901
- type: euclidean_pearson
value: 65.72981824553035
- type: euclidean_spearman
value: 66.72211526923901
- type: manhattan_pearson
value: 65.52315559414296
- type: manhattan_spearman
value: 66.61931702511545
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 76.73608064460915
- type: cos_sim_spearman
value: 76.51424826130031
- type: euclidean_pearson
value: 76.17930213372487
- type: euclidean_spearman
value: 76.51342756283478
- type: manhattan_pearson
value: 75.87085607319342
- type: manhattan_spearman
value: 76.22676341477134
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 65.38779931543048
- type: mrr
value: 74.79313763420059
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.131999999999998
- type: map_at_10
value: 69.131
- type: map_at_100
value: 72.943
- type: map_at_1000
value: 73.045
- type: map_at_3
value: 48.847
- type: map_at_5
value: 59.842
- type: mrr_at_1
value: 85.516
- type: mrr_at_10
value: 88.863
- type: mrr_at_100
value: 88.996
- type: mrr_at_1000
value: 89.00099999999999
- type: mrr_at_3
value: 88.277
- type: mrr_at_5
value: 88.64800000000001
- type: ndcg_at_1
value: 85.516
- type: ndcg_at_10
value: 78.122
- type: ndcg_at_100
value: 82.673
- type: ndcg_at_1000
value: 83.707
- type: ndcg_at_3
value: 80.274
- type: ndcg_at_5
value: 78.405
- type: precision_at_1
value: 85.516
- type: precision_at_10
value: 38.975
- type: precision_at_100
value: 4.833
- type: precision_at_1000
value: 0.509
- type: precision_at_3
value: 70.35
- type: precision_at_5
value: 58.638
- type: recall_at_1
value: 25.131999999999998
- type: recall_at_10
value: 76.848
- type: recall_at_100
value: 91.489
- type: recall_at_1000
value: 96.709
- type: recall_at_3
value: 50.824000000000005
- type: recall_at_5
value: 63.89
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 49.65
- type: f1
value: 47.66791473245483
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 63.78843565968542
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 55.14095244943176
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 53.800000000000004
- type: map_at_10
value: 63.312000000000005
- type: map_at_100
value: 63.93600000000001
- type: map_at_1000
value: 63.955
- type: map_at_3
value: 61.283
- type: map_at_5
value: 62.553000000000004
- type: mrr_at_1
value: 53.800000000000004
- type: mrr_at_10
value: 63.312000000000005
- type: mrr_at_100
value: 63.93600000000001
- type: mrr_at_1000
value: 63.955
- type: mrr_at_3
value: 61.283
- type: mrr_at_5
value: 62.553000000000004
- type: ndcg_at_1
value: 53.800000000000004
- type: ndcg_at_10
value: 67.693
- type: ndcg_at_100
value: 70.552
- type: ndcg_at_1000
value: 71.06099999999999
- type: ndcg_at_3
value: 63.632
- type: ndcg_at_5
value: 65.90899999999999
- type: precision_at_1
value: 53.800000000000004
- type: precision_at_10
value: 8.129999999999999
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 23.467
- type: precision_at_5
value: 15.18
- type: recall_at_1
value: 53.800000000000004
- type: recall_at_10
value: 81.3
- type: recall_at_100
value: 94.3
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 70.39999999999999
- type: recall_at_5
value: 75.9
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.96000000000001
- type: ap
value: 66.89917287702019
- type: f1
value: 83.0239988458119
language:
- en
license: mit
---
# gte-small-zh
General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer different sizes of models for both Chinese and English Languages. The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
## Model List
| Models | Language | Max Sequence Length | Dimension | Model Size |
|:-----: | :-----: |:-----: |:-----: |:-----: |
|[GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 0.67GB |
|[GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.21GB |
|[GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.10GB |
|[GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 0.67GB |
|[GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
|[GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB (CMTEB for Chinese language) benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
- Evaluation results on CMTEB
| Model | Model Size (GB) | Embedding Dimensions | Sequence Length | Average (35 datasets) | Classification (9 datasets) | Clustering (4 datasets) | Pair Classification (2 datasets) | Reranking (4 datasets) | Retrieval (8 datasets) | STS (8 datasets) |
| ------------------- | -------------- | -------------------- | ---------------- | --------------------- | ------------------------------------ | ------------------------------ | --------------------------------------- | ------------------------------ | ---------------------------- | ------------------------ |
| **gte-large-zh** | 0.65 | 1024 | 512 | **66.72** | 71.34 | 53.07 | 81.14 | 67.42 | 72.49 | 57.82 |
| gte-base-zh | 0.20 | 768 | 512 | 65.92 | 71.26 | 53.86 | 80.44 | 67.00 | 71.71 | 55.96 |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 |
| stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 |
| bge-large-zh-v1.5 | 1.3 | 1024 | 512 | 64.53 | 69.13 | 48.99 | 81.6 | 65.84 | 70.46 | 56.25 |
| stella-base-zh-v2 | 0.21 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.96 | 66.1 | 70.08 | 56.92 |
| stella-base-zh | 0.21 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 |
| piccolo-large-zh | 0.65 | 1024 | 512 | 64.11 | 67.03 | 47.04 | 78.38 | 65.98 | 70.93 | 58.02 |
| piccolo-base-zh | 0.2 | 768 | 512 | 63.66 | 66.98 | 47.12 | 76.61 | 66.68 | 71.2 | 55.9 |
| gte-small-zh | 0.1 | 512 | 512 | 60.04 | 64.35 | 48.95 | 69.99 | 66.21 | 65.50 | 49.72 |
| bge-small-zh-v1.5 | 0.1 | 512 | 512 | 57.82 | 63.96 | 44.18 | 70.4 | 60.92 | 61.77 | 49.1 |
| m3e-base | 0.41 | 768 | 512 | 57.79 | 67.52 | 47.68 | 63.99 | 59.54| 56.91 | 50.47 |
|text-embedding-ada-002(openai) | - | 1536| 8192 | 53.02 | 64.31 | 45.68 | 69.56 | 54.28 | 52.0 | 43.35 |
## Usage
Code example
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
input_texts = [
"ไธญๅฝ็้ฆ้ฝๆฏๅช้",
"ไฝ ๅๆฌขๅปๅช้ๆ
ๆธธ",
"ๅไบฌ",
"ไปๅคฉไธญๅๅไปไน"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small-zh")
model = AutoModel.from_pretrained("thenlper/gte-small-zh")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = outputs.last_hidden_state[:, 0]
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-small-zh')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
### Limitation
This model exclusively caters to Chinese texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
|
xai-org/grok-1 | xai-org | "2024-03-28T16:25:32Z" | 893 | 2,126 | grok | [
"grok",
"grok-1",
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-03-17T05:47:21Z" | ---
license: apache-2.0
pipeline_tag: text-generation
library_name: grok
tags:
- grok-1
---
# Grok-1
This repository contains the weights of the Grok-1 open-weights model. You can find the code in the [GitHub Repository](https://github.com/xai-org/grok-1/tree/main).
# Download instruction
Clone the repo & download the `int8` checkpoint to the `checkpoints` directory by executing this command in the repo root directory:
```shell
git clone https://github.com/xai-org/grok-1.git && cd grok-1
pip install huggingface_hub[hf_transfer]
huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt-0/* --local-dir checkpoints --local-dir-use-symlinks False
```
Then, you can run:
```shell
pip install -r requirements.txt
python run.py
```
You should be seeing output from the language model.
Due to the large size of the model (314B parameters), a multi-GPU machine is required to test the model with the example code.
p.s. we're hiring: https://x.ai/careers |
huaweilin/rapidin-alpaca-llama2-7b | huaweilin | "2024-06-11T06:12:54Z" | 893 | 1 | peft | [
"peft",
"en",
"dataset:tatsu-lab/alpaca",
"arxiv:2405.11724",
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T22:26:37Z" | ---
library_name: peft
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
An example model for paper "[Token-wise Influential Training Data Retrieval for Large Language Models](https://arxiv.org/abs/2405.11724)" (Accepted at ACL 2024)
[[GitHub Repo](https://github.com/huawei-lin/RapidIn)]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0 |
apanc/russian-sensitive-topics | apanc | "2021-05-18T22:41:20Z" | 892 | 17 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"toxic comments classification",
"ru",
"arxiv:2103.05345",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language:
- ru
tags:
- toxic comments classification
licenses:
- cc-by-nc-sa
---
## General concept of the model
This model is trained on the dataset of sensitive topics of the Russian language. The concept of sensitive topics is described [in this article ](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. Please note that this article describes the first version of the dataset, while the model is trained on the extended version of the dataset open-sourced on our [GitHub](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/sensitive_topics.csv) or on [kaggle](https://www.kaggle.com/nigula/russian-sensitive-topics). The properties of the dataset is the same as the one described in the article, the only difference is the size.
## Instructions
The model predicts combinations of 18 sensitive topics described in the [article](https://arxiv.org/abs/2103.05345). You can find step-by-step instructions for using the model [here](https://github.com/skoltech-nlp/inappropriate-sensitive-topics/blob/main/Version2/sensitive_topics/Inference.ipynb)
## Metrics
The dataset partially manually labeled samples and partially semi-automatically labeled samples. Learn more in our article. We tested the performance of the classifier only on the part of manually labeled data that is why some topics are not well represented in the test set.
| | precision | recall | f1-score | support |
|-------------------|-----------|--------|----------|---------|
| offline_crime | 0.65 | 0.55 | 0.6 | 132 |
| online_crime | 0.5 | 0.46 | 0.48 | 37 |
| drugs | 0.87 | 0.9 | 0.88 | 87 |
| gambling | 0.5 | 0.67 | 0.57 | 6 |
| pornography | 0.73 | 0.59 | 0.65 | 204 |
| prostitution | 0.75 | 0.69 | 0.72 | 91 |
| slavery | 0.72 | 0.72 | 0.73 | 40 |
| suicide | 0.33 | 0.29 | 0.31 | 7 |
| terrorism | 0.68 | 0.57 | 0.62 | 47 |
| weapons | 0.89 | 0.83 | 0.86 | 138 |
| body_shaming | 0.9 | 0.67 | 0.77 | 109 |
| health_shaming | 0.84 | 0.55 | 0.66 | 108 |
| politics | 0.68 | 0.54 | 0.6 | 241 |
| racism | 0.81 | 0.59 | 0.68 | 204 |
| religion | 0.94 | 0.72 | 0.81 | 102 |
| sexual_minorities | 0.69 | 0.46 | 0.55 | 102 |
| sexism | 0.66 | 0.64 | 0.65 | 132 |
| social_injustice | 0.56 | 0.37 | 0.45 | 181 |
| none | 0.62 | 0.67 | 0.64 | 250 |
| micro avg | 0.72 | 0.61 | 0.66 | 2218 |
| macro avg | 0.7 | 0.6 | 0.64 | 2218 |
| weighted avg | 0.73 | 0.61 | 0.66 | 2218 |
| samples avg | 0.75 | 0.66 | 0.68 | 2218 |
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@inproceedings{babakov-etal-2021-detecting,
title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation",
author = "Babakov, Nikolay and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4",
pages = "26--36",
abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.",
}
``` |
NewstaR/StableGalen-6b | NewstaR | "2023-09-18T11:59:55Z" | 892 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medicine",
"doctor",
"custom_code",
"en",
"dataset:Photolens/MedText-DoctorLLaMa-OpenOrca-formatted",
"dataset:shibing624/medical",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-18T11:10:59Z" | ---
license: other
datasets:
- Photolens/MedText-DoctorLLaMa-OpenOrca-formatted
- shibing624/medical
language:
- en
tags:
- medicine
- doctor
---
# This model is the DeciLM-6b-Instruct model, trained specifically for medicine
Galen uses the
```
### User: {prompt}
### Response:
```
or
```
{prompt}
```
Prompt templates
# Galen Training Recipe:
- target_modules = ["q_proj", "v_proj", "gate_proj", "down_proj", "up_proj", "k_proj", "o_proj"]
- Learning Rate: 4e-4
- LR Scheduler: constant
- 250 Steps
<img src="Loss.png" alt="Loss" width="600" height="400" />
## T3: 1 Hour |
rrivera1849/LUAR-CRUD | rrivera1849 | "2024-03-28T21:56:02Z" | 892 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"LUAR",
"feature-extraction",
"custom_code",
"en",
"license:apache-2.0",
"region:us"
] | feature-extraction | "2023-09-22T14:02:34Z" | ---
license: apache-2.0
language:
- en
---
# rrivera1849/LUAR-CRUD
Author Style Representations using [LUAR](https://aclanthology.org/2021.emnlp-main.70.pdf).
The LUAR training and evaluation repository can be found [here](https://github.com/llnl/luar).
This particular model was trained on a subsample of the Pushshift Reddit Dataset (5 million users) for comments published between January 2015 and October 2019 by authors publishing at least 100 comments during that period.
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("rrivera1849/LUAR-CRUD")
model = AutoModel.from_pretrained("rrivera1849/LUAR-CRUD")
# we embed `episodes`, a colletion of documents presumed to come from an author
# NOTE: make sure that `episode_length` consistent across `episode`
batch_size = 3
episode_length = 16
text = [
["Foo"] * episode_length,
["Bar"] * episode_length,
["Zoo"] * episode_length,
]
text = [j for i in text for j in i]
tokenized_text = tokenizer(
text,
max_length=32,
padding="max_length",
truncation=True,
return_tensors="pt"
)
# inputs size: (batch_size, episode_length, max_token_length)
tokenized_text["input_ids"] = tokenized_text["input_ids"].reshape(batch_size, episode_length, -1)
tokenized_text["attention_mask"] = tokenized_text["attention_mask"].reshape(batch_size, episode_length, -1)
print(tokenized_text["input_ids"].size()) # torch.Size([3, 16, 32])
print(tokenized_text["attention_mask"].size()) # torch.Size([3, 16, 32])
out = model(**tokenized_text)
print(out.size()) # torch.Size([3, 512])
# to get the Transformer attentions:
out, attentions = model(**tokenized_text, output_attentions=True)
print(attentions[0].size()) # torch.Size([48, 12, 32, 32])
```
## Citing & Authors
If you find this model helpful, feel free to cite our [publication](https://aclanthology.org/2021.emnlp-main.70.pdf).
```
@inproceedings{uar-emnlp2021,
author = {Rafael A. Rivera Soto and Olivia Miano and Juanita Ordonez and Barry Chen and Aleem Khan and Marcus Bishop and Nicholas Andrews},
title = {Learning Universal Authorship Representations},
booktitle = {EMNLP},
year = {2021},
}
```
## License
LUAR is distributed under the terms of the Apache License (Version 2.0).
All new contributions must be made under the Apache-2.0 licenses. |
TheBloke/Marcoroni-7B-v3-GGUF | TheBloke | "2023-12-11T00:11:04Z" | 892 | 21 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:AIDC-ai-business/Marcoroni-7B-v3",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | "2023-12-10T16:27:03Z" | ---
base_model: AIDC-ai-business/Marcoroni-7B-v3
inference: false
license: apache-2.0
model_creator: AIDC-ai-business
model_name: Marcoroni 7B V3
model_type: mistral
prompt_template: '### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Marcoroni 7B V3 - GGUF
- Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business)
- Original model: [Marcoroni 7B V3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [AIDC-ai-business's Marcoroni 7B V3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Marcoroni-7B-v3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF)
* [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca-InstructOnly2
```
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [marcoroni-7b-v3.Q2_K.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [marcoroni-7b-v3.Q3_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [marcoroni-7b-v3.Q3_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [marcoroni-7b-v3.Q3_K_L.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [marcoroni-7b-v3.Q4_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [marcoroni-7b-v3.Q4_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [marcoroni-7b-v3.Q4_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [marcoroni-7b-v3.Q5_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [marcoroni-7b-v3.Q5_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [marcoroni-7b-v3.Q5_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [marcoroni-7b-v3.Q6_K.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [marcoroni-7b-v3.Q8_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7B-v3-GGUF/blob/main/marcoroni-7b-v3.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Marcoroni-7B-v3-GGUF and below it, a specific filename to download, such as: marcoroni-7b-v3.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Marcoroni-7B-v3-GGUF marcoroni-7b-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Marcoroni-7B-v3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Marcoroni-7B-v3-GGUF marcoroni-7b-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m marcoroni-7b-v3.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./marcoroni-7b-v3.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./marcoroni-7b-v3.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: AIDC-ai-business's Marcoroni 7B V3
# Marcoroni-7B-v3
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/20uN0wMu2zTyVGgXV9PIo.png" width = 60%>
# Model Details
* **Trained by**: trained by AIDC AI-Business.
* **Model type:** **Marcoroni-7B-v3** is an auto-regressive language model based on mistralai/Mistral-7B-v0.1.
* **Language(s)**: English
This model is a fine-tuned model based on mistralai/Mistral-7B-v0.1 on the open source dataset Open-Orca/SlimOrca and meta-math/MetaMathQA.
Then we align it with DPO algorithm.
# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
<!-- original-model-card end -->
|
froggeric/WestLake-10.7B-v2 | froggeric | "2024-03-13T17:08:23Z" | 892 | 20 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:senseable/WestLake-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-12T10:36:15Z" | ---
base_model:
- senseable/WestLake-7B-v2
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# WestLake-10.7B-v2: Role-Play & Text Generation Specialist Model
[GGUF version available here](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)\
EXL2 versions available here:
[3.3bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-3.3) / [4.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-4.0) / [5.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-5.0) / [6.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-6.0) / [8.0bpw](https://huggingface.co/StopTryharding/WestLake-10.7B-v2-exl2-8.0)
This is my first viable self-merge of the fantastic WestLake-7B-v2 model, obtained after more than 12 rounds of testing different
merge configurations. In my [LLM Creativity Benchmark](https://huggingface.co/datasets/froggeric/creativity), it greatly improves over the original 7B model, and ranks between miqu-1-120b
and goliath-120b! I would describe the improvements as a better writing style, with more details. It has a bit more difficulties following instructions, but not by much.
It is also the first model I have tested to obtain a perfect score with the following test:
```
Write a sequence of nominal groups that flow into one another, using the following rules:
- each nominal group is made of exactly 3 words
- the first word of each nominal group must be the last word of the previous nominal group
- the first word of the first nominal group is: "ball"
- the last word of the last nominal group is: "stone"
- there must be a theme, of your choosing, pertaining to all nominal groups
- there must be exactly 7 nominal groups, leading from the first word (ball) to the last word (stone)
- a word already used at the beginning and end of a nominal group cannot be reused
Present your solution as a list numbered with roman numerals.
Finally, explain why you chose your specific theme.
```
## Usage
* Base model: senseable/WestLake-7B-v2 based of Mistral-7B-v0.1
* Context size: **8192** (even though Mistral-7B is 32k, WestLake was trained with 8k, and using a larger context is likely to cause problems)
* Prompt format: in general, Mistral based models are able to understand many prompt formats, but the following produce the best results, and are recommended (in order of preference)
- **Alpaca** (reported by senseable as working better than ChatML, and confirmed by me)
- ChatML (used during WestLake training)
- Mistral Instruct (original format from Mistral-7B)
- Zephyr (variant of ChatML which I have found to sometimes produce better results)
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).\
This model was merged using the passthrough merge method.\
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [0,9]
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [5,14]
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [10,19]
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [15,24]
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [20,32]
```
---
# Original model card: Westlake-7Bv2: Role-Play & Text Generation Specialist Model
**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of our capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests. |
Lewdiculous/SOVL_Llama3_8B-GGUF-IQ-Imatrix | Lewdiculous | "2024-05-04T14:30:30Z" | 892 | 27 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | "2024-04-25T03:46:59Z" | ---
license: apache-2.0
---
> [!TIP]
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to, you can [**support various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
> I apologize for disrupting your experience.
# #llama-3 #experimental #work-in-progress
GGUF-IQ-Imatrix quants for @jeiku's [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B). <br> Give them some love!
> [!IMPORTANT]
> **Updated!**
> These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br>
> Use **KoboldCpp version 1.64** or higher.
> [!NOTE]
> **Well...!** <br>
> Turns out it was not just a hallucination and this model actually is pretty cool so **give it a chance!** <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes.
> [!WARNING]
> **Use the provided presets.** <br>
> Compatible SillyTavern presets [here (simple)](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here (Virt's roleplay)](https://huggingface.co/Virt-io/SillyTavern-Presets).
> Use the latest version of KoboldCpp.
 |
mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF | mradermacher | "2024-05-10T16:20:05Z" | 892 | 7 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"heathcare",
"medical",
"clinical",
"med",
"lifescience",
"Pharmaceutical",
"Pharma",
"en",
"base_model:aaditya/OpenBioLLM-Llama3-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-04-27T01:59:56Z" | ---
base_model: aaditya/OpenBioLLM-Llama3-70B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- heathcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/OpenBioLLM-Llama3-70B-i1-GGUF/resolve/main/OpenBioLLM-Llama3-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NeverSleep/Llama-3-Lumimaid-70B-v0.1 | NeverSleep | "2024-05-07T11:07:59Z" | 892 | 25 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-28T12:42:49Z" | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-70B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 was not existing when the 70b started training) | Luminae-i2 (8B) (this one gave better results on the 8b) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
DuckyBlender/racist-phi3-GGUF | DuckyBlender | "2024-05-21T12:18:53Z" | 892 | 0 | null | [
"gguf",
"racist",
"nsfw",
"not-for-all-audiences",
"en",
"dataset:DuckyBlender/racist-inputoutput",
"region:us"
] | null | "2024-05-03T17:10:18Z" | ---
datasets:
- DuckyBlender/racist-inputoutput
language:
- en
tags:
- racist
- nsfw
- not-for-all-audiences
---
## Llamacpp Quantizations of DuckyBlender/racist-phi3
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2783">b2783</a> for quantization.
Original model: https://huggingface.co/DuckyBlender/racist-phi3
|
CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF | CHE-72 | "2024-06-22T17:44:30Z" | 892 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/TAIDE-LX-7B-Chat",
"license:other",
"region:us"
] | null | "2024-06-22T17:44:15Z" | ---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: ๆจ้่ฆๅ
ๅๆๆๆฌๆขๆฌพๆ่ฝไฝฟ็จๆญคๆจกๅ
extra_gated_fields:
ๅงๅ(Name): text
็ๆฅ(Date of birth): date_picker
ๅๅฎถ(Country): country
ๆๅฑฌๅฎไฝ(Affiliation): text
geo: ip_location
ๆไธ้ๅบ่กจ็คบๆจๅๆ็คพ็พคๆๆฌๅๆๆธ่ๅไบบ่ณๆ่้ๅ็ฅ่ฒๆ(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L ้กๆจกๅ็คพ็พคๆๆฌๅๆๆธ(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
* ### [ๅไบบ่ณๆ่้ๅ็ฅ่ฒๆ(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: ้ๅบ(Submit)
---
# CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF
This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF --hf-file taide-lx-7b-chat-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF --hf-file taide-lx-7b-chat-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF --hf-file taide-lx-7b-chat-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q3_K_M-GGUF --hf-file taide-lx-7b-chat-q3_k_m.gguf -c 2048
```
|
microsoft/dit-large | microsoft | "2023-02-27T17:58:01Z" | 891 | 17 | transformers | [
"transformers",
"pytorch",
"beit",
"dit",
"arxiv:2203.02378",
"region:us"
] | null | "2022-03-07T20:09:02Z" | ---
tags:
- dit
inference: false
---
# Document Image Transformer (large-sized model)
Document Image Transformer (DiT) model pre-trained on IIT-CDIP (Lewis et al., 2006), a dataset that includes 42 million document images. It was introduced in the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/dit). Note that DiT is identical to the architecture of [BEiT](https://huggingface.co/docs/transformers/model_doc/beit).
Disclaimer: The team releasing DiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Document Image Transformer (DiT) is a transformer encoder model (BERT-like) pre-trained on a large collection of images in a self-supervised fashion. The pre-training objective for the model is to predict visual tokens from the encoder of a discrete VAE (dVAE), based on masked patches.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled document images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for encoding document images into a vector space, but it's mostly meant to be fine-tuned on tasks like document image classification, table detection or document layout analysis. See the [model hub](https://huggingface.co/models?search=microsoft/dit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BeitImageProcessor, BeitForMaskedImageModeling
import torch
from PIL import Image
image = Image.open('path_to_your_document_image').convert('RGB')
processor = BeitImageProcessor.from_pretrained("microsoft/dit-large")
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-large")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, logits = outputs.loss, outputs.logits
```
### BibTeX entry and citation info
```bibtex
@article{Lewis2006BuildingAT,
title={Building a test collection for complex document information processing},
author={David D. Lewis and Gady Agam and Shlomo Engelson Argamon and Ophir Frieder and David A. Grossman and Jefferson Heard},
journal={Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval},
year={2006}
}
``` |
MilaNLProc/xlm-emo-t | MilaNLProc | "2023-03-27T17:52:36Z" | 891 | 4 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"emotion",
"emotion-analysis",
"multilingual",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-04-06T08:56:26Z" | ---
language: multilingual
tags:
- emotion
- emotion-analysis
- multilingual
widget:
- text: "Guarda! ci sono dei bellissimi capibara!"
example_title: "Emotion Classification 1"
- text: "Sei una testa di cazzo!!"
example_title: "Emotion Classification 2"
- text: "Quelle bonne nouvelle!"
example_title: "Emotion Classification 3"
arxiv: ""
---
#
[Federico Bianchi](https://federicobianchi.io/) โข
[Debora Nozza](http://dnozza.github.io/) โข
[Dirk Hovy](http://www.dirkhovy.com/)
## Abstract
Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it.
## Model
This model is the fine-tuned version of the [XLM-T](https://aclanthology.org/2022.lrec-1.27/) model.
### Intended Use
The model is intended as a research output for research communities.
#### Primary intended uses
The primary intended users of these models are AI researchers.
## Results
This model had an F1 of 0.85 on the test set.
## License
For models, restrictions may apply to the data (which are derived from existing datasets) or Twitter (main data source).
We refer users to the original licenses accompanying each dataset and Twitter regulations.
THE SOFTWARE IS PROVIDED โAS ISโ, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## Citation
Please use the following BibTeX entry if you use this model in your project:
```
@inproceedings{bianchi2021feel,
title = "{XLM-EMO: Multilingual Emotion Prediction in Social Media Text}",
author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk",
booktitle = "Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
year = "2022",
publisher = "Association for Computational Linguistics",
}
``` |
DGSpitzer/Cyberpunk-Anime-Diffusion | DGSpitzer | "2023-06-21T20:44:20Z" | 891 | 538 | diffusers | [
"diffusers",
"safetensors",
"cyberpunk",
"anime",
"waifu-diffusion",
"stable-diffusion",
"aiart",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-10-27T17:02:49Z" | ---
language:
- en
thumbnail: "https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/thumbnail.png"
tags:
- cyberpunk
- anime
- waifu-diffusion
- stable-diffusion
- aiart
- text-to-image
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/5.jpg" width="512" height="512"/></center>

# Cyberpunk Anime Diffusion
An AI model that generates cyberpunk anime characters!~
Based of a finetuned Waifu Diffusion V1.3 Model with Stable Diffusion V1.5 New Vae, training in Dreambooth
by [DGSpitzer](https://www.youtube.com/channel/UCzzsYBF4qwtMwJaPJZ5SuPg)
### ๐งจ Diffusers
This repo contains both .ckpt and Diffuser model files. It's compatible to be used as any Stable Diffusion model, using standard [Stable Diffusion Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can convert this model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](https://huggingface.co/blog/stable_diffusion_jax).
```python example for loading the Diffuser
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "DGSpitzer/Cyberpunk-Anime-Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a beautiful perfect face girl in dgs illustration style, Anime fine details portrait of school girl in front of modern tokyo city landscape on the background deep bokeh, anime masterpiece, 8k, sharp high quality anime"
image = pipe(prompt).images[0]
image.save("./cyberpunk_girl.png")
```
# Online Demo
You can try the Online Web UI demo build with [Gradio](https://github.com/gradio-app/gradio), or use Colab Notebook at here:
*My Online Space Demo*
[](https://huggingface.co/spaces/DGSpitzer/DGS-Diffusion-Space)
*Finetuned Diffusion WebUI Demo by anzorq*
[](https://huggingface.co/spaces/anzorq/finetuned_diffusion)
*Colab Notebook*
[](https://colab.research.google.com/github/HelixNGC7293/cyberpunk-anime-diffusion/blob/main/cyberpunk_anime_diffusion.ipynb)[](https://github.com/HelixNGC7293/cyberpunk-anime-diffusion)
*Buy me a coffee if you like this project ;P โฅ*
[](https://www.buymeacoffee.com/dgspitzer)
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/1.jpg" width="512" height="512"/></center>
# **๐Model๐**
AI Model Weights available at huggingface: https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/2.jpg" width="512" height="512"/></center>
# Usage
After model loaded, use keyword **dgs** in your prompt, with **illustration style** to get even better results.
For sampler, use **Euler A** for the best result (**DDIM** kinda works too), CFG Scale 7, steps 20 should be fine
**Example 1:**
```
portrait of a girl in dgs illustration style, Anime girl, female soldier working in a cyberpunk city, cleavage, ((perfect femine face)), intricate, 8k, highly detailed, shy, digital painting, intense, sharp focus
```
For cyber robot male character, you can add **muscular male** to improve the output.
**Example 2:**
```
a photo of muscular beard soldier male in dgs illustration style, half-body, holding robot arms, strong chest
```
**Example 3 (with Stable Diffusion WebUI):**
If using [AUTOMATIC1111's Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
You can simply use this as **prompt** with **Euler A** Sampler, CFG Scale 7, steps 20, 704 x 704px output res:
```
an anime girl in dgs illustration style
```
And set the **negative prompt** as this to get cleaner face:
```
out of focus, scary, creepy, evil, disfigured, missing limbs, ugly, gross, missing fingers
```
This will give you the exactly same style as the sample images above.
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/ReadmeAddon.jpg" width="256" height="353"/></center>
---
**NOTE: usage of this model implies accpetance of stable diffusion's [CreativeML Open RAIL-M license](LICENSE)**
---
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/4.jpg" width="700" height="700"/></center>
<center><img src="https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/img/6.jpg" width="700" height="700"/></center>
|
sdadas/mmlw-retrieval-roberta-large | sdadas | "2024-04-23T15:27:32Z" | 891 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"information-retrieval",
"pl",
"arxiv:2402.13350",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-10-18T19:09:42Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- information-retrieval
language: pl
license: apache-2.0
widget:
- source_sentence: "zapytanie: Jak doลผyฤ 100 lat?"
sentences:
- "Trzeba zdrowo siฤ odลผywiaฤ i uprawiaฤ sport."
- "Trzeba piฤ alkohol, imprezowaฤ i jeลบdziฤ szybkimi autami."
- "Gdy trwaลa kampania politycy zapewniali, ลผe rozprawiฤ
siฤ z zakazem niedzielnego handlu."
---
<h1 align="center">MMLW-retrieval-roberta-large</h1>
MMLW (muszฤ mieฤ lepszฤ
wiadomoลฤ) are neural text encoders for Polish.
This model is optimized for information retrieval tasks. It can transform queries and passages to 1024 dimensional vectors.
The model was developed using a two-step procedure:
- In the first step, it was initialized with Polish RoBERTa checkpoint, and then trained with [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/) on a diverse corpus of 60 million Polish-English text pairs. We utilised [English FlagEmbeddings (BGE)](https://huggingface.co/BAAI/bge-large-en) as teacher models for distillation.
- The second step involved fine-tuning the obtained models with contrastrive loss on [Polish MS MARCO](https://huggingface.co/datasets/clarin-knext/msmarco-pl) training split. In order to improve the efficiency of contrastive training, we used large batch sizes - 1152 for small, 768 for base, and 288 for large models. Fine-tuning was conducted on a cluster of 12 A100 GPUs.
โ ๏ธ **2023-12-26:** We have updated the model to a new version with improved results. You can still download the previous version using the **v1** tag: `AutoModel.from_pretrained("sdadas/mmlw-retrieval-roberta-large", revision="v1")` โ ๏ธ
## Usage (Sentence-Transformers)
โ ๏ธ Our dense retrievers require the use of specific prefixes and suffixes when encoding texts. For this model, each query should be preceded by the prefix **"zapytanie: "** โ ๏ธ
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
query_prefix = "zapytanie: "
answer_prefix = ""
queries = [query_prefix + "Jak doลผyฤ 100 lat?"]
answers = [
answer_prefix + "Trzeba zdrowo siฤ odลผywiaฤ i uprawiaฤ sport.",
answer_prefix + "Trzeba piฤ alkohol, imprezowaฤ i jeลบdziฤ szybkimi autami.",
answer_prefix + "Gdy trwaลa kampania politycy zapewniali, ลผe rozprawiฤ
siฤ z zakazem niedzielnego handlu."
]
model = SentenceTransformer("sdadas/mmlw-retrieval-roberta-large")
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Trzeba zdrowo siฤ odลผywiaฤ i uprawiaฤ sport.
```
## Evaluation Results
The model achieves **NDCG@10** of **58.46** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Acknowledgements
This model was trained with the A100 GPU cluster support delivered by the Gdansk University of Technology within the TASK center initiative.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sลawomir Dadas and Michaล Pereลkiewicz and Rafaล Poลwiata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ibm/testing-patchtst_etth1_pretrain | ibm | "2023-11-21T19:40:20Z" | 891 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"patchtst",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-11-21T19:37:24Z" | ---
tags:
- generated_from_trainer
model-index:
- name: patchtst_etth1_pretrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patchtst_etth1_pretrain
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.14.1
|
TheBloke/LLaMA2-13B-Estopia-GGUF | TheBloke | "2024-01-15T22:59:09Z" | 891 | 8 | transformers | [
"transformers",
"gguf",
"llama",
"mergekit",
"merge",
"base_model:KoboldAI/LLaMA2-13B-Estopia",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
] | null | "2024-01-15T22:39:05Z" | ---
base_model: KoboldAI/LLaMA2-13B-Estopia
inference: false
license: cc-by-nc-4.0
model_creator: KoboldAI
model_name: Llama2 13B Estopia
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- mergekit
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama2 13B Estopia - GGUF
- Model creator: [KoboldAI](https://huggingface.co/KoboldAI)
- Original model: [Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)
<!-- description start -->
## Description
This repo contains GGUF format model files for [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF)
* [KoboldAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2-13b-estopia.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q2_K.gguf) | Q2_K | 2 | 4.85 GB| 7.35 GB | significant quality loss - not recommended for most purposes |
| [llama2-13b-estopia.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama2-13b-estopia.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama2-13b-estopia.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama2-13b-estopia.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-13b-estopia.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss |
| [llama2-13b-estopia.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama2-13b-estopia.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-13b-estopia.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama2-13b-estopia.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama2-13b-estopia.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama2-13b-estopia.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF/blob/main/llama2-13b-estopia.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA2-13B-Estopia-GGUF and below it, a specific filename to download, such as: llama2-13b-estopia.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GGUF llama2-13b-estopia.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GGUF llama2-13b-estopia.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m llama2-13b-estopia.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./llama2-13b-estopia.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./llama2-13b-estopia.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: KoboldAI's Llama2 13B Estopia
# Introduction
- Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar seem to have also improved, remembering details well in many cases.
- It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details.
- It has primarily been tested around prose, using instructions to guide narrative, detail retention and "neutrality" - in particular with regards to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided.
- It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with it's strengths being instruct and adventure, with story being a close second.
# Usage
- The Estopia model has been tested primarily using the Alpaca format, but with the range of models included likely has some understanding of others. Some examples of tested formats are below:
- ```\n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...```
- ```<Story text>\n***\nWrite a summary of the text above\n***\nThe story starts by...```
- Using the Kobold Lite AI adventure mode
- ```User:Hello there!\nAssistant:Good morning...\n```
- For settings, the following are recommended for general use:
- Temperature: 0.8-1.2
- Min P: 0.05-0.1
- Max P: 0.92, or 1 if using a Min P greater than 0
- Top K: 0
- Response length: Higher than your usual amount most likely - for example a common value selected is 512.
- Note: Response lengths are not guaranteed to always be this length. On occasion, responses may be shorter if they convey the response entirely, other times they could be upwards of this value. It depends mostly on the character card, instructions, etc.
- Rep Pen: 1.1
- Rep Pen Range: 2 or 3x your response length
- Stopping tokens (Not needed, but can help if the AI is writing too much):
- ```##||$||---||$||ASSISTANT:||$||[End||$||</s>``` - A single string for Kobold Lite combining the ones below
- ```##```
- ```---```
- ```ASSISTANT:```
- ```[End```
- ```</s>```
- The settings above should provide a generally good experience balancing instruction following and creativity. Generally the higher you set the temperature, the greater the creativity and higher chance of logical errors when providing responses from the AI.
# Recipe
This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which has a high degree of instruction following and creativity in responses, though they were generally shorter and a little more restricted in the scope of outputs, but conveyed nuance better.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: Undi95/UtopiaXL-13B
parameters:
weight: 1.0
- model: Doctor-Shotgun/cat-v1.0-13b
parameters:
weight: 0.02
- model: PygmalionAI/mythalion-13b
parameters:
weight: 0.10
- model: Undi95/Emerhyst-13B
parameters:
weight: 0.05
- model: CalderaAI/13B-Thorns-l2
parameters:
weight: 0.05
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 0.20
dtype: float16
```
The second part of the merge was known as EstopiaV13. This produced responses which were long, but tended to write beyond good stopping points for further instructions to be added as it leant heavily on novel style prose. It did however benefit from a greater degree of neutrality as described above, and retained many of the detail tracking abilities of V9.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: Undi95/UtopiaXL-13B
parameters:
weight: 1.0
- model: Doctor-Shotgun/cat-v1.0-13b
parameters:
weight: 0.01
- model: chargoddard/rpguild-chatml-13b
parameters:
weight: 0.02
- model: PygmalionAI/mythalion-13b
parameters:
weight: 0.08
- model: CalderaAI/13B-Thorns-l2
parameters:
weight: 0.02
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 0.20
dtype: float16
```
The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the dare merging technique.
```yaml
# task-arithmetic style
models:
- model: EstopiaV9
parameters:
weight: 1
density: 1
- model: EstopiaV13
parameters:
weight: 0.05
density: 0.30
merge_method: dare_ties
base_model: TheBloke/Llama-2-13B-fp16
parameters:
int8_mask: true
dtype: bfloat16
```
# Model selection
- Undi95/UtopiaXL-13B
- Solid all around base for models, with the ability to write longer responses and generally good retension to detail.
- Doctor-Shotgun/cat-v1.0-13b
- A medical focused model which is added to focus a little more on the human responses, such as for psycology.
- PygmalionAI/mythalion-13b
- A roleplay and instruct focused model, which improves attentiveness to character card details and the variety of responses
- Undi95/Emerhyst-13B
- A roleplay but also longer form response model. It can be quite variable, but helps add to the depth and possible options the AI can respond with during narratives.
- CalderaAI/13B-Thorns-l2
- A neutral and very attentive model. It is good at chat and following instructions, which help benefit these modes.
- KoboldAI/LLaMA2-13B-Tiefighter
- A solid all around model, focusing on story writing and adventure modes. It provides all around benefits to creativity and the prose in models, along with adventure mode support.
- chargoddard/rpguild-chatml-13b
- A roleplay model, which introduces new data and also improves the detail retention in longer narratives.
# Notes
- With the differing models inside, this model will not have perfect end of sequence tokens which is a problem many merges can share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens - this should be possible to resolve with a quick manual edit once and the model should pick up on it.
- Chat is one of the least tested areas for this model. It works fairly well, but it can be quite character card dependant.
- This is a narrative and prose focused model. As a result, it can and will talk for you if guided to do so (such as asking it to act as a co-author or narrator) within instructions or other contexts. This can be mitigated mostly by adding instructions to limit this, or using chat mode instead.
# Future areas
- Llava
- Some success has been had with merging the llava lora on this. While no in depth testing has been performed, more narrative responses based on the images could be obtained - though there were drawbacks in the form of degraded performance in other areas, and hallucinations due to the fictional focus of this model.
- Stheno
- A merge which has similar promise from Sao. Some merge attempts have been made between the two and were promising, but not entirely consistent at the moment. With some possible refinement, this could produce an even stronger model.
- DynamicFactor
- All the merges used have been based on llama two in this merge, but a dare merge with dynamic factor (an attempted refinement of llama two) showed a beneficial improvement to the instruction abilities of the model, along with lengthy responses. It lost a little of the variety of responses, so perhaps if a balance of it could be added the instruction abilities and reasoning could be improved even further.
<!-- original-model-card end -->
|
Crataco/Nous-Hermes-2-Mistral-7B-DPO-imatrix-GGUF | Crataco | "2024-04-10T09:54:03Z" | 891 | 4 | null | [
"gguf",
"Mistral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-03-06T06:57:04Z" | ---
base_model: mistralai/Mistral-7B-v0.1
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Mistral-7B-DPO
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.
---
This is [Nous Hermes 2 Mistral 7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO), quantized with the help of imatrix so it could offer better performance for being quantized, and have quantization levels available for lower-memory devices to run. [Kalomaze's "groups_merged.txt"](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) was used for the importance matrix, with context set to 8,192.
Here's a chart that provides an approximation of the HellaSwag score (out of 1,000 tasks) and the RAM usage (with `--no-mmap`) with llama.cpp. The chart is incomplete, and thanks to the randomization of tasks, it may be slightly unprecise:
|Quantization|HellaSwag|256 ctx RAM|512 ctx|1024 ctx|2048 ctx|4096 ctx|8192 ctx
|--------|--------|--------|--------|--------|--------|--------|--------|
|IQ1_S |51.7% |1.6 GiB |1.6 GiB |1.7 GiB |1.8 GiB |2.0 GiB |2.5 GiB |
|IQ1_M |60.5% |
|IQ2_XXS |72.5% |1.9 GiB |1.9 GiB |2.0 GiB |2.1 GiB |2.4 GiB |2.9 GiB |
|IQ2_XS |74.2% |2.1 GiB |2.1 GiB |2.2 GiB |2.3 GiB |2.6 GiB |3.1 GiB |
|IQ2_S |76.8% |2.2 GiB |2.2 GiB |2.3 GiB |2.4 GiB |2.7 GiB |3.2 GiB |
|Q2_K (original)|77.4%|2.6 GiB|2.6 GiB|2.7 GiB|2.8 GiB|3.1 GiB |3.6 GiB |
|Q2_K |78.7% |
|IQ3_XXS |79.7% |
|IQ3_XS |80.6% |
|IQ3_S |81.2% |
|IQ3_M |81.1% |
|IQ4_XS |82.0% |
|IQ4_NL |82.0% |
|Q3_K_M (original)|80.0%|3.3 GiB|3.4 GiB|3.4 GiB|3.6 GiB|3.8 GiB|4.3 GiB|
|Q3_K_M |80.9%
|Q4_K_M (original)|81.8%|4.1 GiB|4.2 GiB|4.2 GiB|4.3 GiB|4.6 GiB|5.1 GiB|
|Q4_K_M |81.9%
|Q5_K_M (original)|82.1%|4.8 GiB|4.9 GiB|4.9 GiB|5.1 GiB|5.3 GiB|5.8 GiB|
|Q5_K_M |81.5% |
|Q6_K |81.7% |5.6 GiB |5.6 GiB |5.7 GiB |5.8 GiB |6.1 GiB |6.6 GiB |
I don't recommend using iq1_S. You may be better off using TinyDolphin-1.1B (HellaSwag: 59.0%) and Dolphin 2.6 Phi-2 (HellaSwag: 71.6%) if you're that limited.
The original GGUFs can be found at [NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF). Original model card below.
***
# Nous Hermes 2 - Mistral 7B - DPO

## Model Description
Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and has improved across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.
The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
## Thank you to FluidStack for sponsoring compute for this model!
## Example Outputs
### Describing Weather Patterns in Paris:

### Making JSON Nested Lists

### Roleplaying as a Toaist Master

## Benchmark Results
Nous-Hermes 2 DPO on Mistral 7B is an improvement across the board on the benchmarks below compared to the original OpenHermes 2.5 model, as shown here:

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5776|ยฑ |0.0144|
| | |acc_norm|0.6220|ยฑ |0.0142|
|arc_easy | 0|acc |0.8380|ยฑ |0.0076|
| | |acc_norm|0.8245|ยฑ |0.0078|
|boolq | 1|acc |0.8624|ยฑ |0.0060|
|hellaswag | 0|acc |0.6418|ยฑ |0.0048|
| | |acc_norm|0.8249|ยฑ |0.0038|
|openbookqa | 0|acc |0.3420|ยฑ |0.0212|
| | |acc_norm|0.4540|ยฑ |0.0223|
|piqa | 0|acc |0.8177|ยฑ |0.0090|
| | |acc_norm|0.8264|ยฑ |0.0088|
|winogrande | 0|acc |0.7466|ยฑ |0.0122|
```
Average: 73.72
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|ยฑ |0.0254|
| | |acc_norm|0.2283|ยฑ |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|ยฑ |0.0190|
| | |acc_norm|0.3932|ยฑ |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|ยฑ |0.0292|
| | |acc_norm|0.2522|ยฑ |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|ยฑ |0.0221|
| | |acc_norm|0.5137|ยฑ |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|ยฑ |0.0300|
| | |acc_norm|0.5836|ยฑ |0.0301|
|agieval_sat_en | 0|acc |0.7427|ยฑ |0.0305|
| | |acc_norm|0.7184|ยฑ |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|ยฑ |0.0348|
| | |acc_norm|0.4466|ยฑ |0.0347|
|agieval_sat_math | 0|acc |0.3818|ยฑ |0.0328|
| | |acc_norm|0.3545|ยฑ |0.0323|
```
Average: 43.63
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|ยฑ |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|ยฑ |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|ยฑ |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|ยฑ |0.0214|
| | |exact_str_match |0.2256|ยฑ |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|ยฑ |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|ยฑ |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|ยฑ |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|ยฑ |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|ยฑ |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|ยฑ |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|ยฑ |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|ยฑ |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|ยฑ |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|ยฑ |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|ยฑ |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|ยฑ |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|ยฑ |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|ยฑ |0.0289|
```
Average: 41.94
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3892|ยฑ |0.0171|
| | |mc2 |0.5642|ยฑ |0.0153|
```
# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MixtralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mistral-7B-DPO', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True)
print(f"Response: {response}")
```
# How to cite:
```bibtext
@misc{Nous-Hermes-2-Mistral-7B-DPO,
url={[https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)},
title={Nous Hermes 2 Mistral 7B DPO},
author={"Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
|
second-state/Nomic-embed-text-v1.5-Embedding-GGUF | second-state | "2024-04-17T07:55:55Z" | 891 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"sentence-similarity",
"en",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-04-17T07:20:09Z" | ---
base_model: nomic-ai/nomic-embed-text-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
license: apache-2.0
model_creator: nomic-ai
quantized_by: Second State Inc.
language: en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nomic-embed-text-v1.5-Embedding-GGUF
## Original Model
[nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)
## Run with LlamaEdge
- LlamaEdge version: [v0.8.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.8.2) and above
- Context size: `768`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:nomic-embed-text-v1.5-f16.gguf \
llama-api-server.wasm \
--prompt-template llama-2-chat \
--ctx-size 768 \
--model-name nomic-embed-text-v1.5
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [nomic-embed-text-v1.5-Q2_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q2_K.gguf) | Q2_K | 2 |60.9 MB| smallest, significant quality loss - not recommended for most purposes |
| [nomic-embed-text-v1.5-Q3_K_L.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_L.gguf) | Q3_K_L | 3 | 80.7 MB| small, substantial quality loss |
| [nomic-embed-text-v1.5-Q3_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_M.gguf) | Q3_K_M | 3 | 76.3 MB| very small, high quality loss |
| [nomic-embed-text-v1.5-Q3_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_S.gguf) | Q3_K_S | 3 | 68.8 MB| very small, high quality loss |
| [nomic-embed-text-v1.5-Q4_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_0.gguf) | Q4_0 | 4 | 84.8 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [nomic-embed-text-v1.5-Q4_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_M.gguf) | Q4_K_M | 4 | 90.2 MB| medium, balanced quality - recommended |
| [nomic-embed-text-v1.5-Q4_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_S.gguf) | Q4_K_S | 4 | 84.1 MB| small, greater quality loss |
| [nomic-embed-text-v1.5-Q5_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_0.gguf) | Q5_0 | 5 | 98 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [nomic-embed-text-v1.5-Q5_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_M.gguf) | Q5_K_M | 5 | 103 MB| large, very low quality loss - recommended |
| [nomic-embed-text-v1.5-Q5_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_S.gguf) | Q5_K_S | 5 | 98 MB| large, low quality loss - recommended |
| [nomic-embed-text-v1.5-Q6_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q6_K.gguf) | Q6_K | 6 | 113 MB| very large, extremely low quality loss |
| [nomic-embed-text-v1.5-Q8_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q8_0.gguf) | Q8_0 | 8 | 146 MB| very large, extremely low quality loss - not recommended |
| [nomic-embed-text-v1.5-f16.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-f16.gguf) | Q8_0 | 8 | 274 MB| very large, extremely low quality loss - not recommended |
*Quantized with llama.cpp b2636*
|
naver/DUSt3R_ViTLarge_BaseDecoder_224_linear | naver | "2024-06-21T07:58:52Z" | 891 | 0 | dust3r | [
"dust3r",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"image-to-3d",
"region:us"
] | image-to-3d | "2024-06-19T08:21:44Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
- image-to-3d
library_name: dust3r
repo_url: https://github.com/naver/dust3r
---
## DUSt3R: Geometric 3D Vision Made Easy
```bibtex
@inproceedings{dust3r_cvpr24,
title={DUSt3R: Geometric 3D Vision Made Easy},
author={Shuzhe Wang and Vincent Leroy and Yohann Cabon and Boris Chidlovskii and Jerome Revaud},
booktitle = {CVPR},
year = {2024}
}
```
# License
The code is distributed under the CC BY-NC-SA 4.0 License. See [LICENSE](https://github.com/naver/dust3r/blob/main/LICENSE) for more information.
For the checkpoints, make sure to agree to the license of all the public training datasets and base checkpoints we used, in addition to CC-BY-NC-SA 4.0. See [section: Our Hyperparameters](https://github.com/naver/dust3r?tab=readme-ov-file#our-hyperparameters) for details.
# Model info
Gihub page: https://github.com/naver/dust3r/
Project page: https://dust3r.europe.naverlabs.com/
| Modelname | Training resolutions | Head | Encoder | Decoder |
|-------------|----------------------|------|---------|---------|
| DUSt3R_ViTLarge_BaseDecoder_224_linear | 224x224 | Linear | ViT-L | ViT-B |
# How to use
First, [install dust3r](https://github.com/naver/dust3r?tab=readme-ov-file#installation).
To load the model:
```python
from dust3r.model import AsymmetricCroCo3DStereo
import torch
model = AsymmetricCroCo3DStereo.from_pretrained("naver/DUSt3R_ViTLarge_BaseDecoder_224_linear")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
``` |
model-attribution-challenge/distilgpt2 | model-attribution-challenge | "2022-09-05T17:45:18Z" | 890 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"gpt2",
"text-generation",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-07-26T13:34:09Z" | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), โlanguage models like GPT-2 reflect the biases inherent to the systems they were trained on.โ Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques โ such as knowledge distillation โ on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we donโt support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, Iโm a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAIโs WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), โknowledge distillation is a compression technique in which a compact model โ the student โ is trained to reproduce the behavior of a larger model โ the teacher โ or an ensemble of models.โ Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
maddes8cht/h2oai-h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-gguf | maddes8cht | "2023-11-19T21:34:31Z" | 890 | 1 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | "2023-09-28T20:02:43Z" | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
[]()
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# h2ogpt-gm-oasst1-en-2048-falcon-40b-v2 - GGUF
- Model creator: [h2oai](https://huggingface.co/h2oai)
- Original model: [h2ogpt-gm-oasst1-en-2048-falcon-40b-v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2)
# K-Quants in Falcon 7b models
New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (While Falcon 40b is and always has been fully compatible with K-Quantisation). This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over legacy Q4_0, Q4_1, Q5_0 and Q5_1 Quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model two times the same question and you may encounter bigger quality differences.
---
# Original Model Card:
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install bitsandbytes==0.39.0
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, BitsAndBytesConfig, AutoTokenizer
model_kwargs = {}
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
model_kwargs["quantization_config"] = quantization_config
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
model_kwargs=model_kwargs,
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
quantization_config = None
# optional quantization
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0,
)
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2",
trust_remote_code=True,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
quantization_config=quantization_config
).eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 8192)
(h): ModuleList(
(0-59): 60 x DecoderLayer(
(ln_attn): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(ln_mlp): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=8192, out_features=9216, bias=False)
(dense): Linear(in_features=8192, out_features=8192, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=8192, out_features=32768, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=32768, out_features=8192, bias=False)
)
)
)
(ln_f): LayerNorm((8192,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=8192, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
TheBloke/Psyfighter-13B-GGUF | TheBloke | "2023-11-08T12:04:44Z" | 890 | 9 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:jebcarter/Psyfighter-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | "2023-11-08T11:55:53Z" | ---
base_model: jebcarter/Psyfighter-13B
inference: false
license: llama2
model_creator: Jeb Carter
model_name: Psyfighter 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Psyfighter 13B - GGUF
- Model creator: [Jeb Carter](https://huggingface.co/jebcarter)
- Original model: [Psyfighter 13B](https://huggingface.co/jebcarter/Psyfighter-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jeb Carter's Psyfighter 13B](https://huggingface.co/jebcarter/Psyfighter-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Psyfighter-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Psyfighter-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF)
* [Jeb Carter's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jebcarter/Psyfighter-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [psyfighter-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [psyfighter-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [psyfighter-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [psyfighter-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [psyfighter-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [psyfighter-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [psyfighter-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [psyfighter-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [psyfighter-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [psyfighter-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [psyfighter-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [psyfighter-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Psyfighter-13B-GGUF/blob/main/psyfighter-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Psyfighter-13B-GGUF and below it, a specific filename to download, such as: psyfighter-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Psyfighter-13B-GGUF psyfighter-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Psyfighter-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Psyfighter-13B-GGUF psyfighter-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m psyfighter-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Psyfighter-13B-GGUF", model_file="psyfighter-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jeb Carter's Psyfighter 13B
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 1.0
- model: chaoyi-wu/MedLLaMA_13B
parameters:
weight: 0.01
- model: Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
parameters:
weight: 0.02
dtype: float16
```
This model was made possible thanks to the Compute provided by the KoboldAI community.
<!-- original-model-card end -->
|
indischepartij/MiaLatte-Indo-Mistral-7b | indischepartij | "2024-03-10T03:23:19Z" | 890 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"conversational",
"id",
"en",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-02T05:26:58Z" | ---
language:
- id
- en
license: cc-by-nc-4.0
tags:
- merge
- mergekit
model-index:
- name: MiaLatte-Indo-Mistral-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.23
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.04
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiaLatte-Indo-Mistral-7b
name: Open LLM Leaderboard
---

# MiaLatte-Indo-Mistral-7b
MiaLatte is a derivative model of [OpenMia](https://huggingface.co/indischepartij/OpenMia-Indo-Mistral-7b-v2), which is able to answer everyday questions specifically in Bahasa Indonesia (Indonesia Language).
some of GGUF: https://huggingface.co/indischepartij/MiaLatte-Indo-Mistral-7b-GGUF
# Examples



MiaLatte-Indo-Mistral-7b is a merge of the following models using MergeKit:
* [indischepartij/OpenMia-Indo-Mistral-7b-v2](https://huggingface.co/indischepartij/OpenMia-Indo-Mistral-7b-v2)
* [Obrolin/Kesehatan-7B-v0.1](https://huggingface.co/Obrolin/Kesehatan-7B-v0.1)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## ๐ช Open LLM Benchmark

## ๐งฉ Configuration
```yaml
slices:
models:
- model: indischepartij/OpenMia-Indo-Mistral-7b-v2
parameters:
density: 0.50
weight: 0.35
- model: Obrolin/Kesehatan-7B-v0.1
parameters:
density: 0.50
weight: 0.35
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.50
weight: 0.30
merge_method: dare_ties
base_model: indischepartij/OpenMia-Indo-Mistral-7b-v2
parameters:
int8_mask: true
dtype: float16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "indischepartij/MiaLatte-Indo-Mistral-7b"
messages = [{"role": "user", "content": "Apa jenis skincare yang cocok untuk kulit berjerawat??"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__MiaLatte-Indo-Mistral-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.86|
|AI2 Reasoning Challenge (25-Shot)|66.55|
|HellaSwag (10-Shot) |85.23|
|MMLU (5-Shot) |63.93|
|TruthfulQA (0-shot) |56.04|
|Winogrande (5-shot) |80.35|
|GSM8k (5-shot) |55.04|
|
Lewdiculous/InfinityRP-v1-7B-GGUF-IQ-Imatrix | Lewdiculous | "2024-05-04T14:47:26Z" | 890 | 35 | transformers | [
"transformers",
"gguf",
"quantized",
"roleplay",
"imatrix",
"mistral",
"merge",
"nsfw",
"en",
"base_model:ResplendentAI/Datura_7B",
"base_model:ChaoticNeutrals/Eris_Floramix_DPO_7B",
"license:apache-2.0",
"region:us"
] | null | "2024-03-09T02:35:45Z" | ---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
- merge
- nsfw
inference: false
base_model:
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Eris_Floramix_DPO_7B
---
> [!TIP]
> **Support:** <br>
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to... <br>
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br>
> I apologize for disrupting your experience.
This repository hosts GGUF-Imatrix quantizations for [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B).
The supported --contextsize is 8192.
**What does "Imatrix" mean?**
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
**Steps:**
```
Baseโข GGUF(F16)โข Imatrix-Data(F16)โข GGUF(Imatrix-Quants)
```
**Quants:**
```python
quantization_options = [
"Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
"Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
If you want anything that's not here or another model, feel free to request.
**This is experimental.**
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
**Original model information:**

This is an experimental model I currently use. It's far from great as I'm still working on it, but I leave it here for people to try if interested in this format.
This model was basically made to stop some upsetting hallucinations, so {{char}} mostly and occasionally will wait {{user}} response instead of responding itself or deciding for {{user}}, also, my primary idea was to create a cozy model that thinks.*
Inspired by [lemonilia/Limamono-Mistral-7B-v0.50](https://huggingface.co/lemonilia/Limamono-Mistral-7B-v0.50)
### Style details:
- Quotes are used for character dialogs.
- `"Hey, Anon... What do you think about my style?"`
- Asterisks can be used for narration, but it's optional, it's recommended to use default novel format.
- `*Her cheeks blush slightly, she tries to hide.*`
- Character thoughts are wrapped with ` marks. **This may often spontaneously occur.**
- `My heart skips a beat hearing him call me pretty!`
*If you want thoughts to appear more often, just add something like this to your system prompt: ```"{{char}} internal thoughts are wrapped with ` marks."```*
- Accepted response lengths: ***tiny, short, medium, long, huge***
-
For example: ### Response: (length = medium)
Note: Apparently ***humongous***, ***extreme*** and ***unlimited*** may not work at moment. Not fully tested.
### Prompt format:
Extended Alpaca, as always.
``"You are now in roleplay chat mode. Engage in an endless chat with {{user}}. Always wait {{user}} turn, next actions and responses."``
## Example:

|
tiennlu/GPT2vi_CNNvi_3k | tiennlu | "2024-06-08T11:10:08Z" | 890 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-08T08:39:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/RoGemma-7b-Instruct-Q8_0_L-GGUF | NikolayKozloff | "2024-06-30T19:22:22Z" | 890 | 1 | null | [
"gguf",
"text-generation-inference",
"ro",
"region:us"
] | null | "2024-06-30T18:45:50Z" | ---
language:
- ro
tags:
- text-generation-inference
---
Best quality quant created using this instruction: https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/discussions/3#6679c0ce761779cf45d2321b |
M-CLIP/M-BERT-Distil-40 | M-CLIP | "2022-09-15T10:46:02Z" | 889 | 6 | transformers | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sq",
"am",
"ar",
"az",
"bn",
"bg",
"ca",
"zh",
"nl",
"en",
"et",
"fa",
"fr",
"ka",
"de",
"el",
"hi",
"hu",
"is",
"id",
"it",
"ja",
"kk",
"ko",
"lv",
"mk",
"ms",
"ps",
"pl",
"ro",
"ru",
"sl",
"es",
"sv",
"tl",
"th",
"tr",
"ur",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:04Z" | ---
language:
- sq
- am
- ar
- az
- bn
- bg
- ca
- zh
- nl
- en
- et
- fa
- fr
- ka
- de
- el
- hi
- hu
- is
- id
- it
- ja
- kk
- ko
- lv
- mk
- ms
- ps
- pl
- ro
- ru
- sl
- es
- sv
- tl
- th
- tr
- ur
---
<br />
<p align="center">
<h1 align="center">M-BERT Distil 40</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('M-BERT-Distil-40')
embeddings = model(['รlgen รคr skogens konung!', 'Wie leben Eisbรคren in der Antarktis?', 'ะั ะทะฝะฐะปะธ, ััะพ ะฒัะต ะฑะตะปัะต ะผะตะดะฒะตะดะธ ะปะตะฒัะธ?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [distilbert-base-multilingual](https://huggingface.co/distilbert-base-multilingual-cased) tuned to match the embedding space for [40 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Distil%2040/Fine-Tune-Languages.md), to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 40 languages used during fine-tuning can be found in [SupportedLanguages.md](Fine-Tune-Languages.md).
Training data pairs was generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language.
All translation was done using the [AWS translate service](https://aws.amazon.com/translate/), the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 40 languages.
## Evaluation
[These results can be viewed at Github](https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040). <br>
A non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>
When testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random
|
retrieva-jp/t5-small-short | retrieva-jp | "2023-05-10T00:55:39Z" | 889 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ja",
"arxiv:2002.05202",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-04-25T04:37:20Z" | ---
license: cc-by-sa-4.0
language:
- ja
---
# Model card for model ID
This is a T5 v1.1 model, pre-trained on a Japanese corpus.
## Model details
T5 is a Transformer-based Encoder-Decoder model, now in v1.1, with the following improvements over the original T5.
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see https://arxiv.org/abs/2002.05202 .
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- no parameter sharing between embedding and classifier layer
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller num_heads and d_ff.
This model is based on T5 v1.1. It was pre-trained on a Japanese corpus. For the Japanese corpus, Japanese Wikipedia and mC4/ja were used.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Retrieva, Inc.
- **Model type:** T5 v1.1
- **Language(s) (NLP):** Japanese
- **License:** CC-BY-SA 4.0 Although commercial use is permitted, we kindly request that you contact us beforehand.
## Training Details
We use T5X (https://github.com/google-research/t5x) for the training of this model, and it has been converted to the Huggingface transformer format.
## Training Data
The training data used is
- The Japanese part of the multilingual C4(mC4/ja).
- Japanese Wikipedia(20220920).
#### Preprocessing
The following filtering is done
- Remove documents that do not use a single hiragana character. This removes English-only documents and documents in Chinese.
- Whitelist-style filtering using the top level domain of URL to remove affiliate sites.
#### Training Hyperparameters
- dropout rate: 0.0
- batch size: 256
- fp32
- input length: 512
- output length: 114
- Otherwise, the default value of T5X (https://github.com/google-research/t5x/blob/main/t5x/examples/t5/t5_1_1/small.gin) is followed, including the following.
- optimizer: Adafactor
- base_learning_rate: 1.0
- warmup steps: 10000
#### Speeds, Sizes, Times
We trained 589824 steps.
## Technical Specifications
### Model Architecture and Objective
Model architecture.
- T5 v1.1(https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511)
- Size: Small(~77 million parameters)
### Compute Infrastructure
Google Cloud TPU v4-8.
#### Software
- T5X(https://github.com/google-research/t5x).
## More Information
https://note.com/retrieva/n/n7b4186dc5ada (in Japanese)
## Model Card Authors
Jiro Nishitoba
## Model Card Contact
[email protected]
|
shivanikerai/TinyLlama-1.1B-Chat-v1.0-sku-title-ner-generation-reversed-v1.0 | shivanikerai | "2024-03-27T06:19:41Z" | 889 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-27T06:17:33Z" | Entry not found |
tyfeng1997/llama3-8b-instruct-text-to-sql | tyfeng1997 | "2024-04-22T23:14:59Z" | 889 | 3 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:b-mc2/sql-create-context",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-04-22T19:50:35Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- b-mc2/sql-create-context
model-index:
- name: llama3-8b-instruct-text-to-sql
results: []
metrics:
- accuracy 79.90
language:
- en
---
# llama3-8b-instruct-text-to-sql
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
### Train jupyter notebook
[my github](https://github.com/bofen97/llama3-8b-instruct-text-to-sql/blob/main/llama3-8b-instruct-text-to-sql.ipynb)
### Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "tyfeng1997/llama3-8b-instruct-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\nSCHEMA:\nCREATE TABLE match_season (College VARCHAR, POSITION VARCHAR)"},
{"role": "user", "content": "Which college have both players with position midfielder and players with position defender?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0]
print(tokenizer.decode(response, skip_special_tokens=True))
#
#system
#You are an text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.
#SCHEMA:
#CREATE TABLE match_season (College VARCHAR, POSITION VARCHAR)
#user
#Which college have both players with position midfielder and players with position defender?
#assistant
#SELECT College FROM match_season WHERE POSITION = "Midfielder" INTERSECT SELECT College FROM match_season WHERE POSITION = "Defender"
#
``` |
timm/vit_large_patch14_clip_336.laion2b_ft_in12k_in1k | timm | "2023-05-06T00:14:18Z" | 888 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"dataset:imagenet-12k",
"arxiv:2212.07143",
"arxiv:2210.08402",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-11-02T18:59:19Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
- imagenet-12k
---
# Model card for vit_large_patch14_clip_336.laion2b_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 304.5
- GMACs: 174.7
- Activations (M): 128.2
- Image size: 336 x 336
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- LAION-2B
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_large_patch14_clip_336.laion2b_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_large_patch14_clip_336.laion2b_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 1024) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
recogna-nlp/phi-bode-2-ultraalpaca | recogna-nlp | "2024-04-30T20:33:28Z" | 888 | 1 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"custom_code",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-09T03:49:10Z" | ---
license: mit
model-index:
- name: phi-bode-2-ultraalpaca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 38.35
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 25.17
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 29.61
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 45.39
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 24.43
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 43.97
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 54.15
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 54.59
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia-temp/tweetsentbr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 43.34
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/phi-bode-2-ultraalpaca
name: Open Portuguese LLM Leaderboard
---
# Phi-Bode
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/Phi-Bode/resolve/main/phi-bode.jpg" alt="Phi-Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Phi-Bode รฉ um modelo de linguagem ajustado para o idioma portuguรชs, desenvolvido a partir do modelo base Phi-2B fornecido pela [Microsoft](https://huggingface.co/microsoft/phi-2). Este modelo foi refinado atravรฉs do processo de fine-tuning utilizando o dataset UltraAlpaca. O principal objetivo deste modelo รฉ ser viรกvel para pessoas
que nรฃo possuem recursos computacionais disponรญveis para o uso de LLMs (Large Language Models). Ressalta-se que este รฉ um trabalho em andamento e o modelo ainda apresenta problemas na geraรงรฃo de texto em portuguรชs.
## Caracterรญsticas Principais
- **Modelo Base:** Phi-2B, criado pela Microsoft, com 2.7 bilhรตes de parรขmetros.
- **Dataset para Fine-tuning:** UltraAlpaca
- **Treinamento:** O treinamento foi realizado a partir do fine-tuning completo do phi-2.
# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/phi-bode-2-ultraalpaca)
| Metric | Value |
|--------------------------|---------|
|Average |**39.89**|
|ENEM Challenge (No Images)| 38.35|
|BLUEX (No Images) | 25.17|
|OAB Exams | 29.61|
|Assin2 RTE | 45.39|
|Assin2 STS | 24.43|
|FaQuAD NLI | 43.97|
|HateBR Binary | 54.15|
|PT Hate Speech Binary | 54.59|
|tweetSentBR | 43.34|
|
mradermacher/Fimburs11V3-i1-GGUF | mradermacher | "2024-06-01T16:28:06Z" | 888 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/Fimburs11V3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-31T20:29:42Z" | ---
base_model: mergekit-community/Fimburs11V3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/Fimburs11V3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fimburs11V3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fimburs11V3-i1-GGUF/resolve/main/Fimburs11V3.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Omartificial-Intelligence-Space/Arabic-all-nli-triplet-Matryoshka | Omartificial-Intelligence-Space | "2024-06-26T20:22:10Z" | 888 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"mteb",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2024-06-14T17:54:05Z" | ---
inference: false
language:
- ar
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
- mteb
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: ุฐูุฑ ู
ุชูุงุฒู ุจุนูุงูุฉ ููู ุนูู ูุฏู
ูุงุญุฏุฉ ุจุงููุฑุจ ู
ู ู
ูุทูุฉ ุดุงุทุฆ ุงูู
ุญูุท ุงููุธููุฉ
sentences:
- ุฑุฌู ููุฏู
ุนุฑุถุงู
- ููุงู ุฑุฌู ุจุงูุฎุงุฑุฌ ูุฑุจ ุงูุดุงุทุฆ
- ุฑุฌู ูุฌูุณ ุนูู ุฃุฑููู
- source_sentence: ุฑุฌู ูููุฒ ุฅูู ุณุฑูุฑู ุงููุฐุฑ
sentences:
- ุงูุณุฑูุฑ ูุฐุฑ.
- ุฑุฌู ูุถุญู ุฃุซูุงุก ุบุณูู ุงูู
ูุงุจุณ
- ุงูุฑุฌู ุนูู ุงููู
ุฑ
- source_sentence: ุงููุชูุงุช ุจุงูุฎุงุฑุฌ
sentences:
- ุงู
ุฑุฃุฉ ุชูู ุงูุฎูุท ุฅูู ูุฑุงุช ุจุฌุงูุจ ููู
ุฉ ู
ู ุงููุฑุงุช
- ูุชูุงู ูุฑูุจุงู ูู ุฌููุฉ ู
ุชุนุฉ
- >-
ุซูุงุซ ูุชูุงุช ููููู ุณููุฉ ูู ุบุฑูุฉ ูุงุญุฏุฉ ุชุณุชู
ุน ููุงุญุฏุฉ ุชูุชุจ ุนูู ุงูุญุงุฆุท ูุงูุซุงูุซุฉ
ุชุชุญุฏุซ ุฅูููู
- source_sentence: ุงูุฑุฌู ูุฑุชุฏู ูู
ูุตุงู ุฃุฒุฑู.
sentences:
- >-
ุฑุฌู ูุฑุชุฏู ูู
ูุตุงู ุฃุฒุฑู ูู
ูู ุฅูู ุงูุฌุฏุงุฑ ุจุฌุงูุจ ุงูุทุฑูู ู
ุน ุดุงุญูุฉ ุฒุฑูุงุก ูุณูุงุฑุฉ
ุญู
ุฑุงุก ู
ุน ุงูู
ุงุก ูู ุงูุฎูููุฉ.
- ูุชุงุจ ุงููุตุต ู
ูุชูุญ
- ุฑุฌู ูุฑุชุฏู ูู
ูุต ุฃุณูุฏ ูุนุฒู ุนูู ุงูุฌูุชุงุฑ.
- source_sentence: ูุฌูุณ ุดุงุจ ุฐู ุดุนุฑ ุฃุดูุฑ ุนูู ุงูุญุงุฆุท ููุฑุฃ ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุงู
ุฑุฃุฉ ููุชุงุฉ ุดุงุจุฉ.
sentences:
- ุฐูุฑ ุดุงุจ ููุธุฑ ุฅูู ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุฅู
ุฑุฃุชุงู ุจุฌุงูุจู
- ุฑุฌู ูุณุชููู ุนูู ูุฌูู ุนูู ู
ูุนุฏ ูู ุงูุญุฏููุฉ.
- ุงูุดุงุจ ูุงุฆู
ุจููู
ุง ุงูุฃู
ุชููุฏ ุงุจูุชูุง ุฅูู ุงูุญุฏููุฉ
pipeline_tag: sentence-similarity
model-index:
- name: Omartificial-Intelligence-Space/Arabic-all-nli-triplet-Matryoshka
results:
- dataset:
config: default
name: MTEB BIOSSES (default)
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cosine_pearson
value: 81.20578037912223
- type: cosine_spearman
value: 77.43670420687278
- type: euclidean_pearson
value: 74.60444698819703
- type: euclidean_spearman
value: 72.25767053642666
- type: main_score
value: 77.43670420687278
- type: manhattan_pearson
value: 73.86951335383257
- type: manhattan_spearman
value: 71.41608509527123
task:
type: STS
- dataset:
config: default
name: MTEB SICK-R (default)
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
split: test
type: mteb/sickr-sts
metrics:
- type: cosine_pearson
value: 83.11155556919923
- type: cosine_spearman
value: 79.39435627520159
- type: euclidean_pearson
value: 81.05225024180342
- type: euclidean_spearman
value: 79.09926890001618
- type: main_score
value: 79.39435627520159
- type: manhattan_pearson
value: 80.74351302609706
- type: manhattan_spearman
value: 78.826254748334
task:
type: STS
- dataset:
config: default
name: MTEB STS12 (default)
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cosine_pearson
value: 85.10074960888633
- type: cosine_spearman
value: 78.93043293576132
- type: euclidean_pearson
value: 84.1168219787408
- type: euclidean_spearman
value: 78.44739559202252
- type: main_score
value: 78.93043293576132
- type: manhattan_pearson
value: 83.79447841594396
- type: manhattan_spearman
value: 77.94028171700384
task:
type: STS
- dataset:
config: default
name: MTEB STS13 (default)
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cosine_pearson
value: 81.34459901517775
- type: cosine_spearman
value: 82.73032633919925
- type: euclidean_pearson
value: 82.83546499367434
- type: euclidean_spearman
value: 83.29701673615389
- type: main_score
value: 82.73032633919925
- type: manhattan_pearson
value: 82.63480502797324
- type: manhattan_spearman
value: 83.05016589615636
task:
type: STS
- dataset:
config: default
name: MTEB STS14 (default)
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cosine_pearson
value: 82.53179983763488
- type: cosine_spearman
value: 81.64974497557361
- type: euclidean_pearson
value: 83.03981070806898
- type: euclidean_spearman
value: 82.65556168300631
- type: main_score
value: 81.64974497557361
- type: manhattan_pearson
value: 82.83722360191446
- type: manhattan_spearman
value: 82.4164264119
task:
type: STS
- dataset:
config: default
name: MTEB STS15 (default)
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cosine_pearson
value: 86.5684162475647
- type: cosine_spearman
value: 87.62163215009723
- type: euclidean_pearson
value: 87.3068288651339
- type: euclidean_spearman
value: 88.03508640722863
- type: main_score
value: 87.62163215009723
- type: manhattan_pearson
value: 87.21818681800193
- type: manhattan_spearman
value: 87.94690511382603
task:
type: STS
- dataset:
config: default
name: MTEB STS16 (default)
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cosine_pearson
value: 81.70518105237446
- type: cosine_spearman
value: 83.66083698795428
- type: euclidean_pearson
value: 82.80400684544435
- type: euclidean_spearman
value: 83.39926895275799
- type: main_score
value: 83.66083698795428
- type: manhattan_pearson
value: 82.44430538731845
- type: manhattan_spearman
value: 82.99600783826028
task:
type: STS
- dataset:
config: ar-ar
name: MTEB STS17 (ar-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 82.23229967696153
- type: cosine_spearman
value: 82.40039006538706
- type: euclidean_pearson
value: 79.21322872573518
- type: euclidean_spearman
value: 79.14230529579783
- type: main_score
value: 82.40039006538706
- type: manhattan_pearson
value: 79.1476348987964
- type: manhattan_spearman
value: 78.82381660638143
task:
type: STS
- dataset:
config: ar
name: MTEB STS22 (ar)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 45.95767124518871
- type: cosine_spearman
value: 51.37922888872568
- type: euclidean_pearson
value: 45.519471121310126
- type: euclidean_spearman
value: 51.45605803385654
- type: main_score
value: 51.37922888872568
- type: manhattan_pearson
value: 45.98761117909666
- type: manhattan_spearman
value: 51.48451973989366
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark (default)
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 85.38916827757183
- type: cosine_spearman
value: 86.16303183485594
- type: euclidean_pearson
value: 85.16406897245115
- type: euclidean_spearman
value: 85.40364087457081
- type: main_score
value: 86.16303183485594
- type: manhattan_pearson
value: 84.96853193915084
- type: manhattan_spearman
value: 85.13238442843544
task:
type: STS
- dataset:
config: default
name: MTEB SummEval (default)
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cosine_pearson
value: 30.077426987171158
- type: cosine_spearman
value: 30.163682020271608
- type: dot_pearson
value: 27.31125295906803
- type: dot_spearman
value: 29.138235153208193
- type: main_score
value: 30.163682020271608
- type: pearson
value: 30.077426987171158
- type: spearman
value: 30.163682020271608
task:
type: Summarization
- name: >-
SentenceTransformer based on
sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.8538831619509135
name: Pearson Cosine
- type: spearman_cosine
value: 0.861625750018802
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8496745674597512
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8513333417508545
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8516261261374778
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8540549341060195
name: Spearman Euclidean
- type: pearson_dot
value: 0.7281308266536204
name: Pearson Dot
- type: spearman_dot
value: 0.7230282720855726
name: Spearman Dot
- type: pearson_max
value: 0.8538831619509135
name: Pearson Max
- type: spearman_max
value: 0.861625750018802
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.8542379189261009
name: Pearson Cosine
- type: spearman_cosine
value: 0.8609329396560859
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8486657899695456
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8512120732504748
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8505249483849495
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8538738365440234
name: Spearman Euclidean
- type: pearson_dot
value: 0.7075618032859148
name: Pearson Dot
- type: spearman_dot
value: 0.7028728329509918
name: Spearman Dot
- type: pearson_max
value: 0.8542379189261009
name: Pearson Max
- type: spearman_max
value: 0.8609329396560859
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.8486308733045101
name: Pearson Cosine
- type: spearman_cosine
value: 0.8578681811996274
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8404506123980291
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.845565163232125
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8414758099131773
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8471566121478254
name: Spearman Euclidean
- type: pearson_dot
value: 0.6668664182302968
name: Pearson Dot
- type: spearman_dot
value: 0.6651222481800894
name: Spearman Dot
- type: pearson_max
value: 0.8486308733045101
name: Pearson Max
- type: spearman_max
value: 0.8578681811996274
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.8389761445410956
name: Pearson Cosine
- type: spearman_cosine
value: 0.8499312736457453
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8287388421834582
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8353046807483782
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8297699263897746
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8371843253238523
name: Spearman Euclidean
- type: pearson_dot
value: 0.5855876200722326
name: Pearson Dot
- type: spearman_dot
value: 0.5834920267418124
name: Spearman Dot
- type: pearson_max
value: 0.8389761445410956
name: Pearson Max
- type: spearman_max
value: 0.8499312736457453
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.8290685425698586
name: Pearson Cosine
- type: spearman_cosine
value: 0.8429054799136109
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8100968316314205
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8221121550434057
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8129044863346081
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8255133471709527
name: Spearman Euclidean
- type: pearson_dot
value: 0.5067257944655903
name: Pearson Dot
- type: spearman_dot
value: 0.5109761436588146
name: Spearman Dot
- type: pearson_max
value: 0.8290685425698586
name: Pearson Max
- type: spearman_max
value: 0.8429054799136109
name: Spearman Max
license: apache-2.0
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 79f2382ceacceacdf38563d7c5d16b9ff8d725d6 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Nli-Matryoshka")
# Run inference
sentences = [
'ูุฌูุณ ุดุงุจ ุฐู ุดุนุฑ ุฃุดูุฑ ุนูู ุงูุญุงุฆุท ููุฑุฃ ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุงู
ุฑุฃุฉ ููุชุงุฉ ุดุงุจุฉ.',
'ุฐูุฑ ุดุงุจ ููุธุฑ ุฅูู ุฌุฑูุฏุฉ ุจููู
ุง ุชู
ุฑ ุฅู
ุฑุฃุชุงู ุจุฌุงูุจู',
'ุงูุดุงุจ ูุงุฆู
ุจููู
ุง ุงูุฃู
ุชููุฏ ุงุจูุชูุง ุฅูู ุงูุญุฏููุฉ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8539 |
| **spearman_cosine** | **0.8616** |
| pearson_manhattan | 0.8497 |
| spearman_manhattan | 0.8513 |
| pearson_euclidean | 0.8516 |
| spearman_euclidean | 0.8541 |
| pearson_dot | 0.7281 |
| spearman_dot | 0.723 |
| pearson_max | 0.8539 |
| spearman_max | 0.8616 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8542 |
| **spearman_cosine** | **0.8609** |
| pearson_manhattan | 0.8487 |
| spearman_manhattan | 0.8512 |
| pearson_euclidean | 0.8505 |
| spearman_euclidean | 0.8539 |
| pearson_dot | 0.7076 |
| spearman_dot | 0.7029 |
| pearson_max | 0.8542 |
| spearman_max | 0.8609 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8486 |
| **spearman_cosine** | **0.8579** |
| pearson_manhattan | 0.8405 |
| spearman_manhattan | 0.8456 |
| pearson_euclidean | 0.8415 |
| spearman_euclidean | 0.8472 |
| pearson_dot | 0.6669 |
| spearman_dot | 0.6651 |
| pearson_max | 0.8486 |
| spearman_max | 0.8579 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.839 |
| **spearman_cosine** | **0.8499** |
| pearson_manhattan | 0.8287 |
| spearman_manhattan | 0.8353 |
| pearson_euclidean | 0.8298 |
| spearman_euclidean | 0.8372 |
| pearson_dot | 0.5856 |
| spearman_dot | 0.5835 |
| pearson_max | 0.839 |
| spearman_max | 0.8499 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8291 |
| **spearman_cosine** | **0.8429** |
| pearson_manhattan | 0.8101 |
| spearman_manhattan | 0.8221 |
| pearson_euclidean | 0.8129 |
| spearman_euclidean | 0.8255 |
| pearson_dot | 0.5067 |
| spearman_dot | 0.511 |
| pearson_max | 0.8291 |
| spearman_max | 0.8429 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.33 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.21 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.32 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>ุดุฎุต ุนูู ุญุตุงู ูููุฒ ููู ุทุงุฆุฑุฉ ู
ุนุทูุฉ</code> | <code>ุดุฎุต ูู ุงูููุงุก ุงูุทููุ ุนูู ุญุตุงู.</code> | <code>ุดุฎุต ูู ู
ุทุนู
ุ ูุทูุจ ุนุฌุฉ.</code> |
| <code>ุฃุทูุงู ูุจุชุณู
ูู ู ูููุญูู ูููุงู
ูุฑุง</code> | <code>ููุงู ุฃุทูุงู ุญุงุถุฑูู</code> | <code>ุงูุงุทูุงู ูุชุฌูู
ูู</code> |
| <code>ุตุจู ูููุฒ ุนูู ููุญ ุงูุชุฒูุฌ ูู ู
ูุชุตู ุงูุฌุณุฑ ุงูุฃุญู
ุฑ.</code> | <code>ุงููุชู ูููู
ุจุฎุฏุนุฉ ุงูุชุฒูุฌ</code> | <code>ุงูุตุจู ูุชุฒูุฌ ุนูู ุงูุฑุตูู</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.86 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.2 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>ุงู
ุฑุฃุชุงู ูุชุนุงููุงู ุจููู
ุง ูุญู
ูุงู ุญุฒู
ุฉ</code> | <code>ุฅู
ุฑุฃุชุงู ูุญู
ูุงู ุญุฒู
ุฉ</code> | <code>ุงูุฑุฌุงู ูุชุดุงุฌุฑูู ุฎุงุฑุฌ ู
ุทุนู
</code> |
| <code>ุทูููู ุตุบูุฑูู ูุฑุชุฏูุงู ูู
ูุตุงู ุฃุฒุฑูุ ุฃุญุฏูู
ุง ูุฑุชุฏู ุงูุฑูู
9 ูุงูุขุฎุฑ ูุฑุชุฏู ุงูุฑูู
2 ูููุงู ุนูู ุฎุทูุงุช ุฎุดุจูุฉ ูู ุงูุญู
ุงู
ููุบุณูุงู ุฃูุฏููู
ุง ูู ุงูู
ุบุณูุฉ.</code> | <code>ุทูููู ูุฑุชุฏูุงู ูู
ูุตุงู ู
ุฑูู
ุงู ูุบุณููู ุฃูุฏููู
</code> | <code>ุทูููู ูุฑุชุฏูุงู ุณุชุฑุฉ ูุฐูุจุงู ุฅูู ุงูู
ุฏุฑุณุฉ</code> |
| <code>ุฑุฌู ูุจูุน ุงูุฏููุงุช ูุนู
ูู ุฎูุงู ู
ุนุฑุถ ุนุงูู
ู ุฃููู
ูู ู
ุฏููุฉ ุฃูุฌููุณ</code> | <code>ุฑุฌู ูุจูุน ุงูุฏููุงุช ูุนู
ูู</code> | <code>ุงู
ุฑุฃุฉ ุชุดุฑุจ ูููุชูุง ูู ู
ููู ุตุบูุฑ</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.2294 | 500 | 10.1279 | - | - | - | - | - |
| 0.4587 | 1000 | 8.0384 | - | - | - | - | - |
| 0.6881 | 1500 | 7.3484 | - | - | - | - | - |
| 0.9174 | 2000 | 4.2216 | - | - | - | - | - |
| 1.0 | 2180 | - | 0.8499 | 0.8579 | 0.8609 | 0.8429 | 0.8616 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
facebook/wav2vec2-xlsr-53-phon-cv-ft | facebook | "2021-11-10T11:59:01Z" | 887 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | Entry not found |
spacy/ru_core_news_sm | spacy | "2023-10-10T06:36:38Z" | 887 | 3 | spacy | [
"spacy",
"token-classification",
"ru",
"license:mit",
"model-index",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- spacy
- token-classification
language:
- ru
license: mit
model-index:
- name: ru_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9487739335
- name: NER Recall
type: recall
value: 0.9508500252
- name: NER F Score
type: f_score
value: 0.9498108449
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.987696514
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.987696514
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9702812464
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 2.15295e-05
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9586955101
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.946180635
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9988584475
---
### Details: https://spacy.io/models/ru#ru_core_news_sm
Russian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `ru_core_news_sm` |
| **Version** | `3.7.0` |
| **spaCy** | `>=3.7.0,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Nerus](https://github.com/natasha/nerus) (Alexander Kukushkin) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (900 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Degree=Pos\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=SCONJ`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|POS=NUM`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Case=Nom\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Number=Plur\|POS=ADJ\|StyleVariant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Cnd\|POS=SCONJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Third`, `POS=PART\|Polarity=Neg`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Case=Nom\|Number=Plur\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=INTJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Number=Plur\|POS=PRON`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET`, `Case=Nom\|POS=PRON`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|POS=AUX`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=PART`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Dat\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Degree=Cmp\|POS=ADV`, `Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=Second`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `POS=ADV`, `Case=Acc\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Ins\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Degree=Cmp\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Degree=Pos\|POS=ADJ`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=PUNCT\|StyleVariant=Short`, `Case=Ins\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=SCONJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=First`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=Third`, `Degree=Cmp\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Par\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Number=Plur\|POS=DET\|Person=Third`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADV`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=ADV\|Polarity=Neg`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|POS=NUM`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=First`, `Case=Nom\|Gender=Neut\|POS=NUM`, `Case=Gen\|POS=VERB\|Polarity=Neg`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Number=Plur\|POS=PRON`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Gen\|Number=Plur\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=CCONJ\|Polarity=Neg`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=PRON\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Acc\|POS=NUM`, `Aspect=Imp\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=First`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=First`, `Foreign=Yes\|POS=PUNCT`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=Third`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Plur\|POS=DET`, `Aspect=Imp\|POS=AUX\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|POS=PRON`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=PROPN`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=Second`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=Third`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=First`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Ins\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADV`, `Foreign=Yes\|POS=PART`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Mid`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADV`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=First`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:entity`, `nummod:gov`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.68 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.31 |
| `TOKEN_F` | 97.79 |
| `POS_ACC` | 98.77 |
| `MORPH_ACC` | 97.03 |
| `MORPH_MICRO_P` | 98.68 |
| `MORPH_MICRO_R` | 97.98 |
| `MORPH_MICRO_F` | 98.33 |
| `SENTS_P` | 99.89 |
| `SENTS_R` | 99.89 |
| `SENTS_F` | 99.89 |
| `DEP_UAS` | 95.87 |
| `DEP_LAS` | 94.62 |
| `TAG_ACC` | 98.77 |
| `LEMMA_ACC` | 0.00 |
| `ENTS_P` | 94.88 |
| `ENTS_R` | 95.09 |
| `ENTS_F` | 94.98 | |
tsdocode/text-to-sql | tsdocode | "2021-09-03T06:21:03Z" | 887 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | Simple text to SQL |
Helsinki-NLP/opus-mt-tc-big-en-ko | Helsinki-NLP | "2023-10-10T10:29:58Z" | 887 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"ko",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-08-12T08:02:12Z" | ---
language:
- en
- ko
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-ko
results:
- task:
name: Translation eng-kor
type: translation
args: eng-kor
dataset:
name: flores101-devtest
type: flores_101
args: eng kor devtest
metrics:
- name: BLEU
type: bleu
value: 13.7
- name: chr-F
type: chrf
value: 0.36399
---
# opus-mt-tc-big-en-ko
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from English (en) to Korean (ko).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-07-28
- **License:** CC-BY-4.0
- **Language(s):**
- Source Language(s):
- Target Language(s):
- Valid Target Language Labels:
- **Original Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip)
- **Resources for more information:**
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- More information about released models for this language pair: [OPUS-MT eng-kor README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-kor/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>><<`
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"2, 4, 6 etc. are even numbers.",
"Yes."
]
model_name = "pytorch-models/opus-mt-tc-big-en-ko"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# 2, 4, 6 ๋ฑ์ ์ง์์
๋๋ค.
# ๊ทธ๋
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-ko")
print(pipe("2, 4, 6 etc. are even numbers."))
# expected output: 2, 4, 6 ๋ฑ์ ์ง์์
๋๋ค.
```
## Training
- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* test set translations: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.test.txt)
* test set scores: [opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807-sepvoc_transformer-big_2022-07-28.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
## Citation Information
* Publications: [OPUS-MT โ Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge โ Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unionโs Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unionโs Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Fri Aug 12 11:02:03 EEST 2022
* port machine: LM0-400-22516.local
|
timm/vgg11.tv_in1k | timm | "2023-04-25T20:04:40Z" | 887 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1409.1556",
"license:bsd-3-clause",
"region:us"
] | image-classification | "2023-04-25T20:02:55Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for vgg11.tv_in1k
A VGG image classification model. Trained on ImageNet-1k, original torchvision weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 132.9
- GMACs: 7.6
- Activations (M): 7.4
- Image size: 224 x 224
- **Papers:**
- Very Deep Convolutional Networks for Large-Scale Image Recognition: https://arxiv.org/abs/1409.1556
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vgg11.tv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vgg11.tv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 224, 224])
# torch.Size([1, 128, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vgg11.tv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{Simonyan2014VeryDC,
title={Very Deep Convolutional Networks for Large-Scale Image Recognition},
author={Karen Simonyan and Andrew Zisserman},
journal={CoRR},
year={2014},
volume={abs/1409.1556}
}
```
|
antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR | antoinelouis | "2024-03-26T14:22:59Z" | 887 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"passage-retrieval",
"sentence-similarity",
"fr",
"dataset:unicamp-dl/mmarco",
"base_model:nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | "2023-05-22T20:59:25Z" | ---
pipeline_tag: sentence-similarity
language: fr
license: mit
datasets:
- unicamp-dl/mmarco
metrics:
- recall
tags:
- passage-retrieval
library_name: sentence-transformers
base_model: nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large
model-index:
- name: biencoder-mMiniLMv2-L12-mmarcoFR
results:
- task:
type: sentence-similarity
name: Passage Retrieval
dataset:
type: unicamp-dl/mmarco
name: mMARCO-fr
config: french
split: validation
metrics:
- type: recall_at_500
name: Recall@500
value: 84.4
- type: recall_at_100
name: Recall@100
value: 71.5
- type: recall_at_10
name: Recall@10
value: 45.4
- type: map_at_10
name: MAP@10
value: 24.2
- type: ndcg_at_10
name: nDCG@10
value: 29.4
- type: mrr_at_10
name: MRR@10
value: 24.7
---
# biencoder-mMiniLMv2-L12-mmarcoFR
This is a dense single-vector bi-encoder model for **French** that can be used for semantic search. The model maps queries and passages to 384-dimensional dense vectors which are used to compute relevance through cosine similarity.
## Usage
Here are some examples for using the model with [Sentence-Transformers](#using-sentence-transformers), [FlagEmbedding](#using-flagembedding), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using Sentence-Transformers
Start by installing the [library](https://www.SBERT.net): `pip install -U sentence-transformers`. Then, you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
queries = ["Ceci est un exemple de requรชte.", "Voici un second exemple."]
passages = ["Ceci est un exemple de passage.", "Et voilร un deuxiรจme exemple."]
model = SentenceTransformer('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
q_embeddings = model.encode(queries, normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
similarity = q_embeddings @ p_embeddings.T
print(similarity)
```
#### Using FlagEmbedding
Start by installing the [library](https://github.com/FlagOpen/FlagEmbedding/): `pip install -U FlagEmbedding`. Then, you can use the model like this:
```python
from FlagEmbedding import FlagModel
queries = ["Ceci est un exemple de requรชte.", "Voici un second exemple."]
passages = ["Ceci est un exemple de passage.", "Et voilร un deuxiรจme exemple."]
model = FlagModel('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
q_embeddings = model.encode(queries, normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
similarity = q_embeddings @ p_embeddings.T
print(similarity)
```
#### Using Transformers
Start by installing the [library](https://huggingface.co/docs/transformers): `pip install -U transformers`. Then, you can use the model like this:
```python
from transformers import AutoTokenizer, AutoModel
from torch.nn.functional import normalize
def mean_pooling(model_output, attention_mask):
""" Perform mean pooling on-top of the contextualized word embeddings, while ignoring mask tokens in the mean computation."""
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
queries = ["Ceci est un exemple de requรชte.", "Voici un second exemple."]
passages = ["Ceci est un exemple de passage.", "Et voilร un deuxiรจme exemple."]
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
q_input = tokenizer(queries, padding=True, truncation=True, return_tensors='pt')
p_input = tokenizer(passages, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
q_output = model(**encoded_queries)
p_output = model(**encoded_passages)
q_embeddings = mean_pooling(q_output, q_input['attention_mask'])
q_embedddings = normalize(q_embeddings, p=2, dim=1)
p_embeddings = mean_pooling(p_output, p_input['attention_mask'])
p_embedddings = normalize(p_embeddings, p=2, dim=1)
similarity = q_embeddings @ p_embeddings.T
print(similarity)
```
## Evaluation
The model is evaluated on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries for a corpus of
8.8M candidate passages. We report the mean reciprocal rank (MRR), normalized discounted cumulative gainand (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
To see how it compares to other neural retrievers in French, check out the [*DรฉcouvrIR*](https://huggingface.co/spaces/antoinelouis/decouvrir) leaderboard.
## Training
#### Data
We use the French training samples from the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset, a multilingual machine-translated version of MS MARCO that contains 8.8M passages and 539K training queries. We do not employ the BM25 netaives provided by the official dataset but instead sample harder negatives mined from 12 distinct dense retrievers, using the [msmarco-hard-negatives](https://huggingface.co/datasets/sentence-transformers/msmarco-hard-negatives) distillation dataset.
#### Implementation
The model is initialized from the [nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) checkpoint and optimized via the cross-entropy loss (as in [DPR](https://doi.org/10.48550/arXiv.2004.04906)) with a temperature of 0.05. It is fine-tuned on one 32GB NVIDIA V100 GPU for 20 epochs (i.e., 65.7k steps) using the AdamW optimizer with a batch size of 152, a peak learning rate of 2e-5 with warm up along the first 500 steps and linear scheduling. We set the maximum sequence lengths for both the questions and passages to 128 tokens. We use the cosine similarity to compute relevance scores.
## Citation
```bibtex
@online{louis2024decouvrir,
author = 'Antoine Louis',
title = 'DรฉcouvrIR: A Benchmark for Evaluating the Robustness of Information Retrieval Models in French',
publisher = 'Hugging Face',
month = 'mar',
year = '2024',
url = 'https://huggingface.co/spaces/antoinelouis/decouvrir',
}
``` |
ChrisWilson011016/5CUZD9qWEFDwC5vPfe1FYa2WYac8vidtF3X2NvBiSRUSoqCT_vgg | ChrisWilson011016 | "2024-03-04T18:57:35Z" | 887 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-24T15:24:46Z" | Entry not found |
breezedeus/pix2text-table-rec | breezedeus | "2024-06-17T09:48:05Z" | 887 | 1 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:2303.00716",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-03-29T01:49:03Z" | ---
license: mit
---
# Table Structure Recognition Model for [Pix2Text (P2T)](https://github.com/breezedeus/Pix2Text)
Currently, this model is forked from https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all (much thanks to the authors), and will be evolving afterward.
## Documents for Pix2Text
- [Pix2Text V1.0 New Release: The Best Open-Source Formula Recognition Model | Breezedeus.com](https://www.breezedeus.com/article/p2t-v1.0) ;
- Pix2Text (P2T) Github: [breezedeus/pix2text](https://github.com/breezedeus/Pix2Text) ;
- Pix2Text Online Free Service: [p2t.breezedeus.com](https://p2t.breezedeus.com/) ;
- Pix2Text More: [breezedeus.com/pix2text](https://breezedeus.com/article/pix2text) ;
---
Table Transformer (TATR) model trained on PubTables1M and FinTabNet.c. It was introduced in the paper [Aligning benchmark datasets for table structure recognition](https://arxiv.org/abs/2303.00716) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer).
Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.
## Usage
You can use the raw model for detecting tables in documents. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info. |
Sehong/kobart-QuestionGeneration | Sehong | "2022-05-28T03:21:39Z" | 886 | 5 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"dataset:korquad",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-26T01:02:05Z" | ---
language: ko
tags:
- bart
datasets:
- korquad
license: mit
---
# Korean Question Generation Model
## Github
https://github.com/Seoneun/KoBART-Question-Generation
## Fine-tuning Dataset
KorQuAD 1.0
## Demo
https://huggingface.co/Sehong/kobart-QuestionGeneration
## How to use
```python
import torch
from transformers import PreTrainedTokenizerFast
from transformers import BartForConditionalGeneration
tokenizer = PreTrainedTokenizerFast.from_pretrained('Sehong/kobart-QuestionGeneration')
model = BartForConditionalGeneration.from_pretrained('Sehong/kobart-QuestionGeneration')
text = "1989๋
2์ 15์ผ ์ฌ์๋ ๋๋ฏผ ํญ๋ ฅ ์์๋ฅผ ์ฃผ๋ํ ํ์(ํญ๋ ฅํ์๋ฑ์ฒ๋ฒ์๊ดํ๋ฒ๋ฅ ์๋ฐ)์ผ๋ก ์ง๋ช
์๋ฐฐ๋์๋ค. 1989๋
3์ 12์ผ ์์ธ์ง๋ฐฉ๊ฒ์ฐฐ์ฒญ ๊ณต์๋ถ๋ ์์ข
์์ ์ฌ์ ๊ตฌ์์์ฅ์ ๋ฐ๋ถ๋ฐ์๋ค. ๊ฐ์ ํด 6์ 30์ผ ํ์์ถ์ ์ ์์๊ฒฝ์ ๋ํ๋ก ํ๊ฒฌํ์ฌ ๊ตญ๊ฐ๋ณด์๋ฒ์๋ฐ ํ์๊ฐ ์ถ๊ฐ๋์๋ค. ๊ฒฝ์ฐฐ์ 12์ 18์ผ~20์ผ ์ฌ์ด ์์ธ ๊ฒฝํฌ๋ํ๊ต์์ ์์ข
์์ด ์ฑ๋ช
๋ฐํ๋ฅผ ์ถ์งํ๊ณ ์๋ค๋ ์ฒฉ๋ณด๋ฅผ ์
์ํ๊ณ , 12์ 18์ผ ์ค์ 7์ 40๋ถ ๊ฒฝ ๊ฐ์ค์ด๊ณผ ์ ์๋ด์ผ๋ก ๋ฌด์ฅํ ํน๊ณต์กฐ ๋ฐ ๋๊ณต๊ณผ ์ง์ 12๋ช
๋ฑ 22๋ช
์ ์ฌ๋ณต ๊ฒฝ์ฐฐ์ ์น์ฉ์ฐจ 8๋์ ๋๋์ด ๊ฒฝํฌ๋ํ๊ต์ ํฌ์
ํ๋ค. 1989๋
12์ 18์ผ ์ค์ 8์ 15๋ถ ๊ฒฝ ์์ธ์ฒญ๋๋ฆฌ๊ฒฝ์ฐฐ์๋ ํธ์ ํ์ 5๋ช
๊ณผ ํจ๊ป ๊ฒฝํฌ๋ํ๊ต ํ์ํ๊ด ๊ฑด๋ฌผ ๊ณ๋จ์ ๋ด๋ ค์ค๋ ์์ข
์์ ๋ฐ๊ฒฌ, ๊ฒ๊ฑฐํด ๊ตฌ์์ ์งํํ๋ค. ์์ข
์์ ์ฒญ๋๋ฆฌ๊ฒฝ์ฐฐ์์์ ์ฝ 1์๊ฐ ๋์ ์กฐ์ฌ๋ฅผ ๋ฐ์ ๋ค ์ค์ 9์ 50๋ถ ๊ฒฝ ์์ธ ์ฅ์๋์ ์์ธ์ง๋ฐฉ๊ฒฝ์ฐฐ์ฒญ ๊ณต์๋ถ์ค๋ก ์ธ๊ณ๋์๋ค. <unused0> 1989๋
2์ 15์ผ"
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]
summary_ids = model.generate(torch.tensor([input_ids]))
print(tokenizer.decode(summary_ids.squeeze().tolist(), skip_special_tokens=True))
# <unused0> is sep_token, sep_token seperate content and answer
```
|
SG161222/RealVisXL_V1.0 | SG161222 | "2024-04-12T15:38:41Z" | 886 | 31 | diffusers | [
"diffusers",
"safetensors",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-09-03T11:49:27Z" | ---
license: openrail++
---
<b>This model is available on <a href="https://www.mage.space/">Mage.Space</a> (main sponsor)</b><br>
<b>You can support me directly on Boosty - https://boosty.to/sg_161222</b><br>
<b>It's important! Read it!</b><br>
The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.<br>
The model is aimed at photorealism. Can produce sfw and nsfw images of decent quality.<br>
CivitAI Page: https://civitai.com/models/139562?modelVersionId=154590<br>
<b>Recommended Negative Prompt:</b><br>
(worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth<br>
<b>or another negative prompt</b><br>
<b>Recommended Generation Parameters:</b><br>
Sampling Steps: 15-30<br>
Sampling Method: DPM++ SDE Karras<br>
<b>Recommended Hires Fix Parameters:</b><br>
Hires steps: 10+<br>
Upscaler: 4x-UltraSharp upscaler / or another<br>
Denoising strength: 0.1 - 0.5<br>
Upscale by: 1.1-2.0<br> |
TheBloke/Llama-2-70B-OASST-1-200-GGUF | TheBloke | "2023-09-27T12:48:20Z" | 886 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"sft",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"base_model:jordiclive/Llama-2-70b-oasst-1-200",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | "2023-09-06T19:39:10Z" | ---
language:
- en
- de
- es
- fr
license: apache-2.0
tags:
- sft
datasets:
- OpenAssistant/oasst1
model_name: Open-Assistant Llama2 70B SFT OASST
base_model: jordiclive/Llama-2-70b-oasst-1-200
inference: false
model_creator: Jordan Clive
model_type: llama
prompt_template: '<|prompter|>{prompt}<|endoftext|><|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Open-Assistant Llama2 70B SFT OASST - GGUF
- Model creator: [Jordan Clive](https://huggingface.co/jordiclive)
- Original model: [Open-Assistant Llama2 70B SFT OASST](https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jordan Clive's Open-Assistant Llama2 70B SFT OASST](https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF)
* [Jordan Clive's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenAssistant
```
<|prompter|>{prompt}<|endoftext|><|assistant|>
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `apache-2.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jordan Clive's Open-Assistant Llama2 70B SFT OASST](https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-70b-oasst-1-200.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-70b-oasst-1-200.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [llama-2-70b-oasst-1-200.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [llama-2-70b-oasst-1-200.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [llama-2-70b-oasst-1-200.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-70b-oasst-1-200.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [llama-2-70b-oasst-1-200.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [llama-2-70b-oasst-1-200.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-70b-oasst-1-200.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [llama-2-70b-oasst-1-200.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-OASST-1-200-GGUF/blob/main/llama-2-70b-oasst-1-200.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| llama-2-70b-oasst-1-200.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| llama-2-70b-oasst-1-200.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `llama-2-70b-oasst-1-200.Q6_K.gguf-split-a`
* `llama-2-70b-oasst-1-200.Q6_K.gguf-split-b`
### q8_0
Please download:
* `llama-2-70b-oasst-1-200.Q8_0.gguf-split-a`
* `llama-2-70b-oasst-1-200.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat llama-2-70b-oasst-1-200.Q6_K.gguf-split-* > llama-2-70b-oasst-1-200.Q6_K.gguf && rm llama-2-70b-oasst-1-200.Q6_K.gguf-split-*
cat llama-2-70b-oasst-1-200.Q8_0.gguf-split-* > llama-2-70b-oasst-1-200.Q8_0.gguf && rm llama-2-70b-oasst-1-200.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B llama-2-70b-oasst-1-200.Q6_K.gguf-split-a + llama-2-70b-oasst-1-200.Q6_K.gguf-split-b llama-2-70b-oasst-1-200.Q6_K.gguf
del llama-2-70b-oasst-1-200.Q6_K.gguf-split-a llama-2-70b-oasst-1-200.Q6_K.gguf-split-b
COPY /B llama-2-70b-oasst-1-200.Q8_0.gguf-split-a + llama-2-70b-oasst-1-200.Q8_0.gguf-split-b llama-2-70b-oasst-1-200.Q8_0.gguf
del llama-2-70b-oasst-1-200.Q8_0.gguf-split-a llama-2-70b-oasst-1-200.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Llama-2-70B-OASST-1-200-GGUF and below it, a specific filename to download, such as: llama-2-70b-oasst-1-200.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Llama-2-70B-OASST-1-200-GGUF llama-2-70b-oasst-1-200.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Llama-2-70B-OASST-1-200-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-70B-OASST-1-200-GGUF llama-2-70b-oasst-1-200.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-70b-oasst-1-200.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>{prompt}<|endoftext|><|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-70B-OASST-1-200-GGUF", model_file="llama-2-70b-oasst-1-200.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jordan Clive's Open-Assistant Llama2 70B SFT OASST
# Open-Assistant Llama2 70B SFT OASST
This model is a fine-tuning of [Llama2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) LLM.
It was trained on a mixture of OASST top-1 threads.
## Model Details
- **Finetuned from:** [Llama2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `</s>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
<!-- original-model-card end -->
|
mradermacher/Beyonder-4x7B-v3-i1-GGUF | mradermacher | "2024-05-06T05:20:41Z" | 886 | 2 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/Beyonder-4x7B-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-03T13:59:30Z" | ---
base_model: mlabonne/Beyonder-4x7B-v3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/mlabonne/Beyonder-4x7B-v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Beyonder-4x7B-v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 5.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 5.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q2_K.gguf) | i1-Q2_K | 9.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 10.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q4_0.gguf) | i1-Q4_0 | 13.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Beyonder-4x7B-v3-i1-GGUF/resolve/main/Beyonder-4x7B-v3.i1-Q6_K.gguf) | i1-Q6_K | 20.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
norallm/normistral-7b-warm-instruct | norallm | "2024-06-17T16:54:20Z" | 886 | 21 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"norwegian",
"instruction",
"chat",
"conversational",
"no",
"nb",
"nn",
"dataset:CohereForAI/aya_dataset",
"dataset:OpenAssistant/oasst1",
"dataset:OpenAssistant/oasst2",
"dataset:laion/OIG",
"dataset:HuggingFaceH4/no_robots",
"dataset:databricks/databricks-dolly-15k",
"dataset:glaiveai/glaive-code-assistant-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-05T20:29:52Z" | ---
language:
- 'no'
- nb
- nn
inference: true
tags:
- mistral
- norwegian
- instruction
- chat
license: apache-2.0
pipeline_tag: text-generation
datasets:
- CohereForAI/aya_dataset
- OpenAssistant/oasst1
- OpenAssistant/oasst2
- laion/OIG
- HuggingFaceH4/no_robots
- databricks/databricks-dolly-15k
- glaiveai/glaive-code-assistant-v2
---
# **Instruction-tuned NorMistral-7b-warm**
<img align="center" src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%>
This is a model instruction-tuned on open datasets released under the most permissive apache-2.0 licence (in other words, we don't use any datasets generated by ChatGPT) โ thus we can release this model under the same license and make it openly available for commercial applications. The model has been finetuned on 4096 context length, twice as many tokens as the base model.
The released weights are still a work in progress and they might change in the future. This is the first iteration of instruction-tuning our NorMistral models and it currently uses only the SFT phase without any preference optimization. Please let us know your feedback to improve the model in future releases.
## Finetuning corpus
The corpus was compiled by this process:
1. We gathered all openly available datasets: [Aya](https://huggingface.co/datasets/CohereForAI/aya_dataset), [OASST 1](https://huggingface.co/datasets/OpenAssistant/oasst1), [OASST 2](https://huggingface.co/datasets/OpenAssistant/oasst2), [OIG-small-chip2](https://huggingface.co/datasets/laion/OIG), [No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and [Glaive code assistant](Glaive-code-assistant-v2).
2. These were first manually inspected and filtered, and then automatically filtered with [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) to remove incorrect, offensive, non-English and American-centric responses.
3. The responses were augmented to be more descriptive by [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
4. Since most of that dataset contains only a single dialogue turn, we generated more turns using [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).
5. Finally, we translated the resulting dataset into Bokmรฅl and Nynorsk using [NorMistral-7b-warm](https://huggingface.co/norallm/normistral-7b-warm).
## How to run the model?
### 1. Prompt format
NorMistral uses ChatML-like format for structuring the (multi-turn) conversations. An example of a prompt in this format looks like the following (notice the special `<|im_start|>` and `<|im_end|>` tokens).
```
<|im_start|> user
Hva er hovedstaden i Norge?<|im_end|>
<|im_start|> assistant
Hovedstaden i Norge er Oslo. Denne byen ligger i den sรธrรธstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populรฆre turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berรธmte norske skulptรธren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er ogsรฅ hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser.<|im_end|>
<|im_start|> user
Gi meg en liste over de beste stedene รฅ besรธke i hovedstaden<|im_end|>
<|im_start|> assistant
```
This prompt-format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) in the NorMistral's tokens, so it can be easily applied by calling the
`tokenizer.apply_chat_template()` method:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-7b-warm-instruct")
messages = [
{"role": "user", "content": "Hva er hovedstaden i Norge?"},
{"role": "assistant", "content": "Hovedstaden i Norge er Oslo. Denne byen ligger i den sรธrรธstlige delen av landet, ved Oslofjorden. Oslo er en av de raskest voksende byene i Europa, og den er kjent for sin rike historie, kultur og moderne arkitektur. Noen populรฆre turistattraksjoner i Oslo inkluderer Vigelandsparken, som viser mer enn 200 skulpturer laget av den berรธmte norske skulptรธren Gustav Vigeland, og det kongelige slott, som er den offisielle residensen til Norges kongefamilie. Oslo er ogsรฅ hjemsted for mange museer, gallerier og teatre, samt mange restauranter og barer som tilbyr et bredt utvalg av kulinariske og kulturelle opplevelser."},
{"role": "user", "content": "Gi meg en liste over de beste stedene รฅ besรธke i hovedstaden"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
### 2. Generation parameters
The model is quite sensitive to generation parameters, it's important to set them correctly. We give an example of a reasonable generation setting below. Note that other libraries have different defaults and that it's important to check them.
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("norallm/normistral-7b-warm-instruct", torch_dtype=torch.bfloat16)
model.generate(
gen_input,
max_new_tokens=1024,
top_k=64, # top-k sampling
top_p=0.9, # nucleus sampling
temperature=0.3, # a low temparature to make the outputs less chaotic
repetition_penalty=1.0, # turn the repetition penalty off, having it on can lead to very bad outputs
do_sample=True, # randomly sample the outputs
use_cache=True # speed-up generation
)
```
## About the base model
NorMistral-7b-warm is a large Norwegian language model initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and
continuously pretrained on a total of 260 billion subword tokens (using six repetitions of open Norwegian texts).
This model is a part of the NORA.LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project](https://hplt-project.org/), [the National Library of Norway](https://huggingface.co/NbAiLab), and [the University of Turku](https://huggingface.co/TurkuNLP).
All the models are pre-trained on the same dataset and with the same tokenizer.
NorMistral-7b-warm has over 7 billion parameters and is based on [the Mistral architecture](https://huggingface.co/mistralai/Mistral-7B-v0.1).
The NORA.LLM language model family includes (as of now):
- [**NorMistral-7b-warm**](https://huggingface.co/norallm/normistral-7b-warm) -- an LLM initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and continuously pretrained on Norwegian data;
- [**NorMistral-7b-scratch**](https://huggingface.co/norallm/normistral-7b-scratch) -- a Mistral-based LLM pretrained from scratch on Norwegian data;
- [**NorBLOOM-7b-scratch**](https://huggingface.co/norallm/NorBLOOM-7b-scratch) -- a BLOOM-based LLM pretrained from scratch on Norwegian data.
_____
## Quantization
### Provided files
| Name | Quant method | Bits Per Weight | Size | Max RAM/VRAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [normistral-7b-warm-instruct.Q3_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-instruct.Q3_K_M.gguf) | Q3_K_M | 3.89 | 3.28 GB| 5.37 GB | very small, high loss of quality |
| [normistral-7b-warm-instruct.Q4_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-instruct.Q4_K_M.gguf) | Q4_K_M | 4.83 | 4.07 GB| 6.16 GB | medium, balanced quality |
| [normistral-7b-warm-instruct.Q5_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-instruct.Q5_K_M.gguf) | Q5_K_M | 5.67 | 4.78 GB| 6.87 GB | large, very low quality loss |
| [normistral-7b-warm-instruct.Q6_K.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-instruct.Q6_K.gguf) | Q6_K | 6.56 | 5.54 GB| 7.63 GB | very large, extremely low quality loss |
| [normistral-7b-warm-instruct.Q8_0.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-instruct.Q8_0.gguf) | Q8_0 | 8.50 | 7.17 GB| 9.26 GB | very large, extremely low quality loss |
### How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for example.
#### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-ccp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Directly from huggingface-hub (requires huggingface-hub to be installed)
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama.from_pretrained(
repo_id="norallm/normistral-7b-warm-instruct", # HuggingFace repository containing the GGUF files.
filename="*Q4_K_M.gguf", # suffix of the filename containing the level of quantization.
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"""<s><|im_start|> user
Hva kan jeg bruke einstape til?<|im_end|>
<|im_start|> assistant
""", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["<|im_end|>"], # Example stop token
echo=True, # Whether to echo the prompt
temperature=0.3 # Temperature to set, for Q3_K_M, Q4_K_M, Q5_K_M, and Q6_0 it is recommended to set it relatively low.
)
# Chat Completion API
llm.create_chat_completion(
messages = [
{
"role": "user",
"content": "Hva kan jeg bruke einstape til?"
}
]
)
``` |
cognitivecomputations/dolphin-2.9.2-Phi-3-Medium | cognitivecomputations | "2024-06-02T23:41:33Z" | 886 | 11 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:cognitivecomputations/Dolphin-2.9.2",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:internlm/Agent-FLAN",
"dataset:cognitivecomputations/SystemChat-2.0",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-31T03:38:23Z" | ---
license: mit
language:
- en
base_model:
- unsloth/Phi-3-mini-4k-instruct
datasets:
- cognitivecomputations/Dolphin-2.9.2
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- internlm/Agent-FLAN
- cognitivecomputations/SystemChat-2.0
---
# Dolphin 2.9.2 Phi 3 Medium ๐ฌ
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and with help from the community of Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsor of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40Snode
This model is based on Phi-3-Medium-Instruct-4k, and is governed by the MIT license with which Microsoft released Phi-3.
Since Microsoft only released the fine-tuned model - Dolphin-2.9.2-Phi-3-Medium has not been entirely cleaned of refusals.
The base model has 4k context, and the qLoRA fine-tuning was with 4k sequence length.
It took 3.5 days on 8xL40S node provided by Crusoe Cloud
This model uses the ChatML prompt template.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## evals:
<img src="https://i.ibb.co/jrBsPLY/file-9gw-A1-Ih-SBYU3-PCZ92-ZNb-Vci-P.png" width="600" /> |
YituTech/conv-bert-medium-small | YituTech | "2021-02-24T11:24:27Z" | 885 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | Entry not found |
Kodiks/news-category-classification-turkish | Kodiks | "2022-12-23T09:41:51Z" | 885 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text",
"news-category-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-23T08:30:46Z" | ---
language:
- tr
tags:
- text # Example: audio
- text-classification # Example: automatic-speech-recognition
- news-category-classification # Example: speech
metrics:
- accuracy # Example: wer. Use metric id from https://hf.co/metrics
- f1
- precision
- recall
---
## INTERPRESS TURKISH NEWS CATEGORY CLASSIFICATION MODEL - TEST - v0.2
LABELS = {
0: 'spor',
1: 'is_ve_finans',
2: 'lifestyle',
3: 'eglence',
4: 'seyahat',
5: 'egitim',
6: 'bilim',
7: 'teknoloji',
8: 'kultur_sanat',
9: 'otomotiv',
10: 'politika',
11: 'endustri',
12: 'moda',
13: 'yemek',
14: 'saglik'
}
ACC = 0.9190, F1 = 0.7590, PRECISION = 0.7966, RECALL = 0.7385
### DATASETS
```
DatasetDict({
train: Dataset({
features: ['labels', 'content'],
num_rows: 112705
})
test: Dataset({
features: ['labels', 'content'],
num_rows: 28177
})
})
```
### DATASETS SAMPLES
```
{
"label": "eglence",
"content": "BฤฐR ลARKI ฤฐKฤฐ VERSฤฐYON Sevilen ลarkฤฑcฤฑ Okan Tok, benim miladฤฑm dediฤi 7. Teklisi โTopuk Sesleriโ ile mรผzikseverlerle yeniden buluลtu. Okan Tok yorumuyla โTopuk Sesleriโ isimli eserin sรถz ve mรผziฤi Meltem Kurtoฤlu imzasฤฑ taลฤฑrken Akฤฑn ฤฐshakoฤlu yรถnetmenliฤinde klip รงalฤฑลmasฤฑ tamamlandฤฑ. Klip รงalฤฑลmasฤฑnda sevdiฤi erkek iรงin box ringinde dรถvรผลen iki kadฤฑn konu edilirken Okan Tok โ En รงok keyif aldฤฑฤฤฑm klip oldu iki kadฤฑnฤฑn benim iรงin dรถvรผลtรผฤรผnรผ izlemek egomu okลadฤฑ ve inanฤฑlmaz duygular yaลadฤฑm halen her gรผn 2 kez izliyorum. Sanฤฑrฤฑm bir erkeฤi mutlu etmenin yolu onun iรงin mรผcadele etmekten geรงiyor, bunu klip รงekimlerimde hissettimโ dedi. รnรผmรผzdeki gรผnlerde ลarkฤฑsฤฑnฤฑn remix versiyonunu da sevenlerine sunacaฤฤฑnฤฑ ifade eden Okan Tok, ilk mรผzikteki hedefim รงok iyi bir solist olmaktฤฑ ve bunu baลardฤฑm. Yฤฑlmadan รงalฤฑลtฤฑm รผrettim, ลimdi bambaลka bir Okan olarak herkesin karลฤฑsฤฑndayฤฑm, hayalim Ebru Gรผndeล ile dรผet yapmak, vurgusunu yaptฤฑ. OKAN TOK YORUMUYLA โTOPUK SESLERฤฐโ KLฤฐP LฤฐNK"
},
{
"label": "is_ve_finans",
"content": "THY\'nin eski genel mรผdรผrรผ ve pilotu Atilla รelebi hayatฤฑnฤฑ kaybetti Tรผrk Hava Yollarฤฑnda (THY) 1994-1997 yฤฑllarฤฑ arasฤฑnda genel mรผdรผrlรผk gรถrevini yรผrรผten Kaptan Pilot Atilla รelebi, 92 yaลฤฑnda vefat etti. Pilot Atilla รelebiโnin vefatฤฑnฤฑ sosyal medya hesabฤฑndan duyuran THY Genel Mรผdรผrรผ Bilal Ekลi, "Tรผrk Hava Yollarฤฑ eski genel mรผdรผrlerimizden Kaptan Pilot Atilla รelebi\'nin vefat ettiฤini รผzรผntรผyle รถฤrenmiล bulunuyorum. Merhuma Allahโtan rahmet, kฤฑymetli ailesine ve yakฤฑnlarฤฑna baลsaฤlฤฑฤฤฑ dilerim. THY ailemizin baลฤฑ saฤ olsun." ifadelerini kullandฤฑ."
},
{
"label": "teknoloji",
"content": "KVVK: AstraZeneca Tรผrkiye\'de veri sฤฑzฤฑntฤฑsฤฑ yaลandฤฑ Kiลisel Verileri Koruma Kurumu (KVKK), AstraZenecaโda veri ihlali yaลandฤฑฤฤฑnฤฑ duyurdu. KVKKโnin aรงฤฑklamasฤฑna gรถre AstraZenecaโnฤฑn web sitesinde tarayฤฑcฤฑnฤฑn โkaynaฤฤฑ gรถrรผntรผleโ รถzelliฤini kullanarak adaylarฤฑn kiลisel bilgileri gรถrรผlebiliyordu. ฤฐล baลvurusu yapan 981 kiลinin kiลisel verileri bu yรถntemle herkes tarafฤฑndan gรถrรผntรผlenebildi.AstraZeneca Tรผrkiyeโye iล baลvurusu yapan kiลilerin รผlke, isim, e-posta, telefon numarasฤฑ, maaล beklentisi, mevcut maaล bilgisi, var ise โAstraZenecaโ ile รถnceki iล iliลkisi bilgisi, vize durumu, mevcut veya รถnceki iลveren ile ilgili kฤฑsฤฑtlayฤฑcฤฑ maddelerin ayrฤฑntฤฑlarฤฑ sฤฑzdฤฑ.KVKK\'den aรงฤฑklama"Bilindiฤi รผzere, 6698 sayฤฑlฤฑ Kiลisel Verilerin Korunmasฤฑ Kanununun โVeri gรผvenliฤine iliลkin yรผkรผmlรผlรผklerโ baลlฤฑklฤฑ 12 nci maddesinin (5) numaralฤฑ fฤฑkrasฤฑ โฤฐลlenen kiลisel verilerin kanuni olmayan yollarla baลkalarฤฑ tarafฤฑndan elde edilmesi hรขlinde, veri sorumlusu bu durumu en kฤฑsa sรผrede ilgilisine ve Kurula bildirir. Kurul, gerekmesi hรขlinde bu durumu, kendi internet sitesinde ya da uygun gรถreceฤi baลka bir yรถntemle ilan edebilir.โ hรผkmรผnรผ amirdir.Veri sorumlusu sฤฑfatฤฑnฤฑ haiz AstraZeneca ฤฐlaรง Sanayi ve Ticaret Limited ลirketi tarafฤฑndan Kurula iletilen veri ihlal bildiriminde รถzetle;รalฤฑลan adaylarฤฑnฤฑn, โAstraZenecaโdaki aรงฤฑk pozisyonlara baลvurabilmelerini saฤlayan, veri iลleyen (Workday Limited) sisteminde ihlal gerรงekleลtiฤi,Bir adayฤฑn kendi hesabฤฑna giriล yapmadan iล baลvurusu gรถnderebilmesi iรงin Workdayโin, kullanฤฑcฤฑ oturumuna iliลkin verileri izlemek adฤฑna bir JavaScript deฤiลkeni kullandฤฑฤฤฑ, bu deฤiลkenin HTML kaynaฤฤฑna dahil edildiฤi, deฤiลkenin deฤerinin, harici kariyer sitesi iรงin HTML kaynaฤฤฑnฤฑ inceleyen, รถrneฤin tarayฤฑcฤฑnฤฑn "Kaynaฤฤฑ Gรถrรผntรผle" รถzelliฤini kullanan kullanฤฑcฤฑlar tarafฤฑndan gรถrรผlebilir hale geldiฤi,Bahse konu durumdan dolayฤฑ, 13 Temmuz 2022 saat 23:53 (ฤฐstanbul saati) ila 14 Temmuz 2022 saat 05:32 arasฤฑnda ve/veya 20 Temmuz 2022 saat 22:06 ila 1 Aฤustos 2022 saat 23:15 arasฤฑnda iล baลvurusu yapan รงalฤฑลan adaylarฤฑnฤฑn kiลisel verilerinin kฤฑsa sรผreliฤine eriลilebilir hale geldiฤi,ฤฐhlalin 31 Temmuz 2022 tarihinde tespit edildiฤi,ฤฐhlalden etkilenen kiลi grubunun รงalฤฑลan adaylarฤฑ olduฤu,ฤฐhlalden tahmini 981 kiลinin etkilendiฤi,ฤฐhlalden etkilenen kiลisel verilerin; รผlke, isim, e-posta, telefon numarasฤฑ, maaล beklentisi, mevcut maaล bilgisi, var ise โAstraZenecaโ ile รถnceki iล iliลkisi bilgisi, vize durumu, mevcut veya รถnceki iลveren ile ilgili kฤฑsฤฑtlayฤฑcฤฑ maddelerin ayrฤฑntฤฑlarฤฑ olduฤunun tahmin edildiฤi, buna ek olarak, รงalฤฑลan adaylarฤฑnฤฑn veri iลleyen sistemi รผzerinden gรถnรผllรผ olarak da kiลisel URL, iล deneyimi, eฤitim, dil, yetenekler ve รถzgeรงmiล verilerini saฤlayabildiฤibilgilerine yer verilmiลtir.Konuya iliลkin inceleme devam etmekle birlikte, Kiลisel Verileri Koruma Kurulunun 11.08.2022 tarih ve 2022/831 sayฤฑlฤฑ Kararฤฑ ile sรถz konusu veri ihlali bildiriminin Kurumun internet sayfasฤฑnda ilan edilmesine karar verilmiลtir."Redmi Note 11 ailesi hฤฑzlฤฑ ลarj konusunda sฤฑnฤฑrlarฤฑ zorluyorBu videoda konumuz hฤฑzlฤฑ ลarj teknolojileri. Xiaomiโnin bรผyรผk ilgi gรถren Redmi Note 11 serisinin de en รงarpฤฑcฤฑ รถzelliฤi hฤฑzlฤฑ ลarj. Peki uygun fiyata saฤlam รถzellikler sunan Redmi Note 11 ailesi รถzellikle hฤฑzlฤฑ ลarj konusunda ne kadar iddialฤฑ?daha fazla video iรงin"
},
``` |
ciasimbaya/ObjectDetection | ciasimbaya | "2023-08-14T16:45:28Z" | 885 | 9 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-08-14T16:42:45Z" | Entry not found |
khalidsaifullaah/bengali-lyricist-gpt2 | khalidsaifullaah | "2021-10-05T20:01:50Z" | 884 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"text generation",
"bengali",
"bangla",
"causal-lm",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: bn
tags:
- text generation
- bengali
- gpt2
- bangla
- causal-lm
widget:
- text: "เฆเงเฆฌเฆจเงเฆฐ เฆฎเฆพเฆจเง "
pipeline_tag: text-generation
---
<!--
---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bengali-lyricist-gpt2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
-->
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bengali-lyricist-gpt2
This model is a fine-tuned version of [flax-community/gpt2-bengali](https://huggingface.co/flax-community/gpt2-bengali) on the [Bengali Song Lyrics](https://www.kaggle.com/shakirulhasan/bangla-song-lyrics) dataset from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 2.1199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 284 | 2.0302 |
| 1.9991 | 2.0 | 568 | 2.0079 |
| 1.9991 | 3.0 | 852 | 1.9956 |
| 1.9135 | 4.0 | 1136 | 1.9885 |
| 1.9135 | 5.0 | 1420 | 1.9840 |
| 1.8561 | 6.0 | 1704 | 1.9831 |
| 1.8561 | 7.0 | 1988 | 1.9828 |
| 1.8094 | 8.0 | 2272 | 1.9827 |
| 1.7663 | 9.0 | 2556 | 1.9868 |
| 1.7663 | 10.0 | 2840 | 1.9902 |
| 1.7279 | 11.0 | 3124 | 1.9961 |
| 1.7279 | 12.0 | 3408 | 2.0023 |
| 1.6887 | 13.0 | 3692 | 2.0092 |
| 1.6887 | 14.0 | 3976 | 2.0162 |
| 1.6546 | 15.0 | 4260 | 2.0225 |
| 1.6217 | 16.0 | 4544 | 2.0315 |
| 1.6217 | 17.0 | 4828 | 2.0410 |
| 1.5953 | 18.0 | 5112 | 2.0474 |
| 1.5953 | 19.0 | 5396 | 2.0587 |
| 1.5648 | 20.0 | 5680 | 2.0679 |
| 1.5648 | 21.0 | 5964 | 2.0745 |
| 1.5413 | 22.0 | 6248 | 2.0836 |
| 1.5238 | 23.0 | 6532 | 2.0890 |
| 1.5238 | 24.0 | 6816 | 2.0969 |
| 1.5043 | 25.0 | 7100 | 2.1035 |
| 1.5043 | 26.0 | 7384 | 2.1091 |
| 1.4936 | 27.0 | 7668 | 2.1135 |
| 1.4936 | 28.0 | 7952 | 2.1172 |
| 1.4822 | 29.0 | 8236 | 2.1186 |
| 1.4783 | 30.0 | 8520 | 2.1199 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.9.1.dev0
- Tokenizers 0.10.3
|
Dr-BERT/DrBERT-7GB | Dr-BERT | "2023-05-28T17:37:44Z" | 884 | 12 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"medical",
"chemistry",
"biomedical",
"life science",
"fr",
"dataset:Dr-BERT/NACHOS",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-12-25T22:05:07Z" | ---
license: apache-2.0
datasets:
- Dr-BERT/NACHOS
language:
- fr
library_name: transformers
tags:
- medical
- chemistry
- biomedical
- life science
widget:
- text: "Le patient est atteint d'une <mask>."
---
<p align="center">
<img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/>
</p>
# DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
In recent years, pre-trained language models (PLMs) achieve the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains.
In this paper, we propose an original study of PLMs in the medical domain on French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks.
Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained.
# 1. DrBERT models
**DrBERT** is a French RoBERTa trained on a open source corpus of French medical crawled textual data called NACHOS. Models with different amount of data from differents public and private sources are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French supercomputer. Only the weights of the models trained using exclusively open-sources data are publicly released to prevent any personnal information leak and to follow the european GDPR laws :
| Model name | Corpus | Number of layers | Attention Heads | Embedding Dimension | Sequence Length | Model URL |
| :------: | :---: | :---: | :---: | :---: | :---: | :---: |
| `DrBERT-7-GB-cased-Large` | NACHOS 7 GB | 24 | 16 | 1024 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB-Large) |
| `DrBERT-7-GB-cased` | NACHOS 7 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-7GB) |
| `DrBERT-4-GB-cased` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB) |
| `DrBERT-4-GB-cased-CP-CamemBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-CamemBERT) |
| `DrBERT-4-GB-cased-CP-PubMedBERT` | NACHOS 4 GB | 12 | 12 | 768 | 512 | [HuggingFace](https://huggingface.co/Dr-BERT/DrBERT-4GB-CP-PubMedBERT) |
# 2. Using DrBERT
You can use DrBERT with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follow.
Loading the model and tokenizer :
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Dr-BERT/DrBERT-7GB")
model = AutoModel.from_pretrained("Dr-BERT/DrBERT-7GB")
```
Perform the mask filling task :
```python
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="Dr-BERT/DrBERT-7GB", tokenizer="Dr-BERT/DrBERT-7GB")
results = fill_mask("La patiente est atteinte d'une <mask>")
```
# 3. Pre-training DrBERT tokenizer and model from scratch by using HuggingFace Transformers Library
## 3.1 Install dependencies
```bash
accelerate @ git+https://github.com/huggingface/accelerate@66edfe103a0de9607f9b9fdcf6a8e2132486d99b
datasets==2.6.1
sentencepiece==0.1.97
protobuf==3.20.1
evaluate==0.2.2
tensorboard==2.11.0
torch >= 1.3
```
## 3.2 Download NACHOS Dataset text file
Download the full NACHOS dataset from [Zenodo]() and place it the the `from_scratch` or `continued_pretraining` directory.
## 3.3 Build your own tokenizer from scratch based on NACHOS
Note : This step is required only in the case of an from scratch pre-training, if you want to do a continued pre-training you just have to download the model and the tokenizer that correspond to the model you want to continue the training from. In this case, you simply have to go to the HuggingFace Hub, select a model (for example [RoBERTa-base](https://huggingface.co/roberta-base)). Finally, you have to download the entire model / tokenizer repository by clicking on the `Use In Transformers` button and get the Git link `git clone https://huggingface.co/roberta-base`.
Build the tokenizer from scratch on your data of the file `./corpus.txt` by using `./build_tokenizer.sh`.
## 3.4 Preprocessing and tokenization of the dataset
First, replace the field `tokenizer_path` of the shell script to match the path of your tokenizer directory downloaded before using HuggingFace Git or the one you have build.
Run `./preprocessing_dataset.sh` to generate the tokenized dataset by using the givent tokenizer.
## 3.5 Model training
First, change the number of GPUs `--ntasks=128` you are needing to match your computational capabilities in the shell script called `run_training.sh`. In our case, we used 128 V100 32 GB GPUs from 32 nodes of 4 GPUs (`--ntasks-per-node=4` and `--gres=gpu:4`) during 20 hours (`--time=20:00:00`).
If you are using Jean Zay, you also need to change the `-A` flag to match one of your `@gpu` profile capable of running the job. You also need to move **ALL** of your datasets, tokenizer, script and outputs on the `$SCRATCH` disk space to preserve others users of suffuring of IO issues.
### 3.5.1 Pre-training from scratch
Once the SLURM parameters updated, you have to change name of the model architecture in the flag `--model_type="camembert"` and to update the `--config_overrides=` according to the specifications of the architecture you are trying to train. In our case, RoBERTa had a `514` sequence length, a vocabulary of `32005` (32K tokens of the tokenizer and 5 of the model architecture) tokens, the identifier of the beginning-of-sentence token (BOS) and end-of-sentence token (EOS) are respectivly `5` and `6`. Change the
Then, go to `./from_scratch/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
### 3.5.2 continue pre-training
Once the SLURM parameters updated, you have to change path of the model / tokenizer you want to start from `--model_name_or_path=` / `--tokenizer_name=` to the path of the model downloaded from HuggingFace's Git in the section 3.3.
Then, go to `./continued_pretraining/` directory.
Run `sbatch ./run_training.sh` to send the training job in the SLURM queue.
# 4. Fine-tuning on a downstream task
You just need to change the name of the model to `Dr-BERT/DrBERT-7GB` in any of the examples given by HuggingFace's team [here](https://huggingface.co/docs/transformers/tasks/sequence_classification).
# Citation BibTeX
```bibtex
@inproceedings{labrak2023drbert,
title = {{DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains}},
author = {Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Rouvier, Mickael and Morin, Emmanuel and Daille, Bรฉatrice and Gourraud, Pierre-Antoine},
booktitle = {Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (ACL'23), Long Paper},
month = july,
year = 2023,
address = {Toronto, Canada},
publisher = {Association for Computational Linguistics}
}
```
|
lfcc/bert-portuguese-squad2 | lfcc | "2023-01-04T20:11:41Z" | 884 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-01-04T12:58:13Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-portuguese-squad2
results: []
widget:
- text: O que รฉ que Mateusz Morawiecki quer?
context: >-
O primeiro-ministro da Polรณnia, Mateusz Morawiecki, manifestou-se hoje
favorรกvel ร reintroduรงรฃo da pena de morte para delitos mais graves, uma
opรงรฃo na qual reconheceu divergรชncias com a Igreja Catรณlica e que o afastaria
das regras europeias nesta รกrea.
A Carta de direitos fundamentais da Uniรฃo Europeia (UE) estabelece no artigo
2.ยบ que "ninguรฉm pode ser condenado ร pena de morte nem executado", mas
Morawiecki defendeu a reabertura do debate na Polรณnia e advertiu que o mundo
pode estar a avanรงar apressadamente para a sua aboliรงรฃo.
- text: Qual foi o instrumento usado?
context: >-
Um homem esfaqueou e matou a mulher no Seixal, na manhรฃ desta segunda-feira,
e tentou suicidar-se. Segundo o Jornal de Notรญcias, o filho do casal terรก sofrido
ferimentos ligeiros em circunstรขncias ainda por apurar. O crime estรก a ser
investigado pela PJ de Setรบbal.
O homicรญdio aconteceu em casa na Avenida do Pinhal Vidal, no Seixal. O agressor atacou
a vรญtima com uma arma branca e, segundo o Correio da Manhรฃ, o filho terรก surpreendido o pai
a atacar a mรฃe e tentou defendรช-la.
O atacante, de 55 anos, foi transportado para o Hospital Garcia de Orta em estado
crรญtico e e estรก ร guarda policial no hospital, enquantoโ o filho do casal, um homem
com 22 anos, que apresentava ferimentos numa mรฃo, recebeu tratamento hospitalar.
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-portuguese-squad2
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on SQuAD_v2 dataset, translated for portuguese.
## Model description
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
digiplay/RealismEngine_v1 | digiplay | "2023-07-19T06:46:24Z" | 884 | 5 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-19T05:21:19Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/17277/realism-engine?modelVersionId=20414
fit for interior design, ad design , fantasy poster design
*BASE ON SD 2.1
Original Author's DEMO images :



More fit LYCORIS here:
https://civitai.com/models/111737/irish-style-interior-design
https://civitai.com/models/110288/gothic-style-interior-design
https://civitai.com/models/108304/arabic-style-interior-design
https://civitai.com/models/108295/antique-style-interior-design
...
https://civitai.com/user/Sa_May/models |
alvdansen/soft-and-squishy-linework | alvdansen | "2024-06-16T16:23:01Z" | 884 | 15 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-image | "2024-06-15T15:44:10Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
a little faun boy with horns and curly hair, spots on his face, wearing an
oversized t shirt
output:
url: images/ComfyUI_01387_.png
- text: a toad
output:
url: images/ComfyUI_01385_.png
- text: >-
a girl with blonde-brown hair and big round glasses, blue eyes, white t
shirt, shorts, ram horns, tiefling
output:
url: images/ComfyUI_01383_.png
- text: >-
a girl with blonde-brown hair and big round glasses, blue eyes, white t
shirt, shorts
output:
url: images/ComfyUI_01381_.png
- text: >-
a young girl with a gentle and thoughtful expression sitting on the floor
surrounded by pills, soft blue eyes, big round glasses, pink hair, messy
apartment, hollowness, glitch art
output:
url: images/ComfyUI_01379_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: cc-by-nc-4.0
---
# Soft & Squishy Linework
<Gallery />
## Model description
A really fun model to prompt with no trigger, intended for anime style soft and sort of lofi linework.
I like running it at 0.9 but it works at other weights as well. Better with Euler A in my opinion.
Sometimes I will use 'messy' and 'bad linework' for a small neg. prompt.
## Download model
Weights for this model are available in Safetensors format.
Model release is for research purposes only. For commercial use, please contact me directly.
[Download](/alvdansen/soft-and-squishy-linework/tree/main) them in the Files & versions tab.
|
AkshatSurolia/ICD-10-Code-Prediction | AkshatSurolia | "2023-05-05T15:24:14Z" | 883 | 38 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- text-classification
---
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
Run the model with clinical diagonosis text:
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
Return the Top-5 predicted ICD-10 codes:
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
return [ config.id2label[ids] for ids in results] |
Helsinki-NLP/opus-mt-xh-en | Helsinki-NLP | "2023-08-16T12:08:52Z" | 883 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-xh-en
* source languages: xh
* target languages: en
* OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.en | 45.8 | 0.610 |
|
uer/gpt2-chinese-couplet | uer | "2023-10-17T15:12:44Z" | 883 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"zh",
"arxiv:1909.05658",
"arxiv:2212.06385",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: zh
widget:
- text: "[CLS]ๅฝ ่ฒ ๅคฉ ้ฆ ๏ผ ๅงน ็ดซ ๅซฃ ็บข ๏ผ ็ขง ๆฐด ้ ไบ ๆฌฃ ๅ
ฑ ่ต -"
---
# Chinese Couplet GPT2 Model
## Model description
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Besides, the model could also be pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with parameters above one billion, and extends it to a multimodal pre-training framework.
The model is used to generate Chinese couplets. You can download the model from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-chinese-couplet](https://huggingface.co/uer/gpt2-chinese-couplet).
Since the parameter skip_special_tokens is used in the pipelines.py, special tokens such as [SEP], [UNK] will be deleted, the output results of Hosted inference API (right) may not be properly displayed..
## How to use
You can use the model directly with a pipeline for text generation:
When the parameter skip_special_tokens is True:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]ไธน ๆซ ๆฑ ๅท ไบบ ๅ ๅป -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]ไธน ๆซ ๆฑ ๅท ไบบ ๅ ๅป - ้ป ๅถ ๅฃฐ ไป ๅคฉ ๅค ๆฅ ้
ๆ'}]
```
When the parameter skip_special_tokens is False:
```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-couplet")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-couplet")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> text_generator("[CLS]ไธน ๆซ ๆฑ ๅท ไบบ ๅ ๅป -", max_length=25, do_sample=True)
[{'generated_text': '[CLS]ไธน ๆซ ๆฑ ๅท ไบบ ๅ ๅป - ้ป ๅถ ๅฃฐ ๆ ้
ไธ ่พ [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP] [SEP]'}]
```
## Training data
Training data contains 700,000 Chinese couplets which are collected by [couplet-clean-dataset](https://github.com/v-zich/couplet-clean-dataset).
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 25,000 steps with a sequence length of 64.
```
python3 preprocess.py --corpus_path corpora/couplet.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path couplet_dataset.pt --processes_num 16 \
--seq_length 64 --data_processor lm
```
```
python3 pretrain.py --dataset_path couplet_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/gpt2/config.json \
--output_model_path models/couplet_gpt2_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 25000 --save_checkpoint_steps 5000 --report_steps 1000 \
--learning_rate 5e-4 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path models/couplet_gpt2_model.bin-25000 \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
``` |
timm/coat_tiny.in1k | timm | "2023-04-24T03:44:08Z" | 883 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.06399",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-24T03:44:03Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coat_tiny.in1k
A CoaT (Co-Scale Conv-Attentional Transformer) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.5
- GMACs: 4.3
- Activations (M): 27.2
- Image size: 224 x 224
- **Papers:**
- Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coat_tiny.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coat_tiny.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (torch.Size([1, 785, 152]), torch.Size([1, 197, 152]), torch.Size([1, 50, 152])) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{Xu_2021_ICCV,
author = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
title = {Co-Scale Conv-Attentional Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {9981-9990}
}
```
|
TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF | TheBloke | "2023-09-27T12:46:53Z" | 883 | 4 | transformers | [
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"en",
"base_model:Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged",
"license:agpl-3.0",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-31T16:30:31Z" | ---
language:
- en
license: agpl-3.0
library_name: transformers
tags:
- llama
- llama-2
model_name: Llama 2 13B Chat - LimaRP v2 Merged
base_model: Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
inference: false
model_creator: Doctor-Shotgun
model_type: llama
pipeline_tag: text-generation
prompt_template: "### Instruction:\nCharacter's Persona: bot character description\n\
\nUser's persona: user character description\n \nScenario: what happens in the\
\ story\n\nPlay the role of Character. You must engage in a roleplaying chat with\
\ User below this line. Do not write dialogues and narration for User. Character\
\ should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\
\n### Response:\nCharacter: \n"
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B Chat - LimaRP v2 Merged - GGUF
- Model creator: [Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Llama 2 13B Chat - LimaRP v2 Merged](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Doctor-Shotgun's Llama 2 13B Chat - LimaRP v2 Merged](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF)
* [Doctor-Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: LimaRP-Alpaca
```
### Instruction:
Character's Persona: bot character description
User's persona: user character description
Scenario: what happens in the story
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
### Input:
User: {prompt}
### Response:
Character:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `agpl-3.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Doctor-Shotgun's Llama 2 13B Chat - LimaRP v2 Merged](https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-13b-chat-limarp-v2-merged.Q2_K.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-2-13b-chat-limarp-v2-merged.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama-2-13b-chat-limarp-v2-merged.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama-2-13b-chat-limarp-v2-merged.Q3_K_L.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama-2-13b-chat-limarp-v2-merged.Q4_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-2-13b-chat-limarp-v2-merged.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama-2-13b-chat-limarp-v2-merged.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama-2-13b-chat-limarp-v2-merged.Q5_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-13b-chat-limarp-v2-merged.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama-2-13b-chat-limarp-v2-merged.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama-2-13b-chat-limarp-v2-merged.Q6_K.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama-2-13b-chat-limarp-v2-merged.Q8_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF/blob/main/llama-2-13b-chat-limarp-v2-merged.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF and below it, a specific filename to download, such as: llama-2-13b-chat-limarp-v2-merged.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF llama-2-13b-chat-limarp-v2-merged.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF llama-2-13b-chat-limarp-v2-merged.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-2-13b-chat-limarp-v2-merged.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\nCharacter's Persona: bot character description\n\nUser's persona: user character description\n \nScenario: what happens in the story\n\nPlay the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\n### Response:\nCharacter:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama-2-13B-chat-limarp-v2-merged-GGUF", model_file="llama-2-13b-chat-limarp-v2-merged.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Doctor-Shotgun's Llama 2 13B Chat - LimaRP v2 Merged
# Model Card: llama-2-13b-chat-limarp-v2-merged
This is a Llama 2-based model consisting of Llama 2 13b chat (https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) merged with LIMARP Lora v2 (https://huggingface.co/lemonilia/limarp-llama2-v2).
Requested by @dampf
## Usage:
Intended to be prompted with the Alpaca instruction format of the LIMARP v2:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
### Input:
Character: {utterance}
### Response:
User: {utterance}
```
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the base model and lora for details.
<!-- original-model-card end -->
|
Yntec/DucHaitenLofi | Yntec | "2023-10-29T12:52:32Z" | 883 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"3D",
"Style",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-29T10:41:08Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- 3D
- Style
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# DucHaiten Lofi A
This model with the Blessed2 VAE baked in. Original page: https://civitai.com/models/65706?modelVersionId=70355
You can support the creator at https://linktr.ee/Duc_Haiten
Sample and prompt:

pretty cute little girl in supermoon ON THE SUNSET WITH CLOUDS AND SHINNING RAYS REFLECTIONON LAKE PHOTOREALISTIC |
BVRA/MegaDescriptor-T-224 | BVRA | "2024-01-05T01:37:22Z" | 883 | 0 | wildlife-datasets | [
"wildlife-datasets",
"pytorch",
"image-classification",
"ecology",
"animals",
"re-identification",
"arxiv:2103.14030",
"arxiv:2304.07193",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | "2023-11-06T12:29:10Z" | ---
tags:
- image-classification
- ecology
- animals
- re-identification
library_name: wildlife-datasets
license: cc-by-nc-4.0
---
# Model card for MegaDescriptor-T-224
A Swin-T image feature model. Superwisely pre-trained on animal re-identification datasets.
## Model Details
- **Model Type:** Animal re-identification / feature backbone
- **Model Stats:**
- Params (M): 28.3
- Image size: 224 x 224
- Architecture: swin_tiny_patch4_window7_224
- **Paper:** [WildlifeDatasets_An_Open-Source_Toolkit_for_Animal_Re-Identification](https://openaccess.thecvf.com/content/WACV2024/html/Cermak_WildlifeDatasets_An_Open-Source_Toolkit_for_Animal_Re-Identification_WACV_2024_paper.html)
- **Related Papers:**
- [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
- [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/pdf/2304.07193.pdf)
- **Pretrain Dataset:** All available re-identification datasets --> https://github.com/WildlifeDatasets/wildlife-datasets
## Model Usage
### Image Embeddings
```python
import timm
import torch
import torchvision.transforms as T
from PIL import Image
from urllib.request import urlopen
model = timm.create_model("hf-hub:BVRA/MegaDescriptor-T-224", pretrained=True)
model = model.eval()
train_transforms = T.Compose([T.Resize(224),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
output = model(train_transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{vcermak2024wildlifedatasets,
title={WildlifeDatasets: An open-source toolkit for animal re-identification},
author={{\v{C}}erm{\'a}k, Vojt{\v{e}}ch and Picek, Lukas and Adam, Luk{\'a}{\v{s}} and Papafitsoros, Kostas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={5953--5963},
year={2024}
}
``` |
sambanovasystems/SambaLingo-Turkish-Chat | sambanovasystems | "2024-04-16T22:40:23Z" | 883 | 43 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"tr",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-15T22:46:59Z" | ---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- tr
- en
---
# SambaLingo-Turkish-Chat
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Turkish-Chat is a human aligned chat model trained in Turkish and English. It is trained using direct preference optimization on top the base model [SambaLingo-Turkish-Base](https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Base). The base model adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Turkish by training on 42 billion tokens from the Turkish split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Turkish, English
- **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try this model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Turkish-Chat", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Performance Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and ฮฒ=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by the Metaโs [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing LLama 2 and open sourcing FLORES-200 dataset
- Nguyen et al for open sourcing CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open source evaluation framework
- Hugging Face-H4 team for open source the zephyr training recipe and alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
KingNish/KingNish-Llama3-8b | KingNish | "2024-05-06T08:02:10Z" | 883 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"mlabonne/ChimeraLlama-3-8B-v3",
"conversational",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:mlabonne/ChimeraLlama-3-8B-v3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-06T07:53:17Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- mlabonne/ChimeraLlama-3-8B-v3
base_model:
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- mlabonne/ChimeraLlama-3-8B-v3
license: mit
pipeline_tag: text-generation
---
# KingNish-Llama3-8b
KingNish-Llama3-8b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
## ๐งฉ Configuration
```yaml
models:
- model: nbeerbower/llama-3-gutenberg-8B
# No parameters necessary for base model
- model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.6
weight: 0.4
- model: mlabonne/ChimeraLlama-3-8B-v3
parameters:
density: 0.65
weight: 0.3
merge_method: dare_ties
base_model: nbeerbower/llama-3-gutenberg-8B
parameters:
int8_mask: true
dtype: float16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "KingNish/KingNish-Llama3-8b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
anas-awadalla/mpt-1b-redpajama-200b-dolly | anas-awadalla | "2023-07-30T00:23:55Z" | 882 | 0 | transformers | [
"transformers",
"pytorch",
"mosaic_gpt",
"text-generation",
"custom_code",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-06-03T19:00:49Z" | ---
license: cc-by-sa-3.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# MPT-1b-RedPajama-200b-dolly
MPT-1b-RedPajama-200b-dolly is a 1.3 billion parameter decoder-only transformer pre-trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and subsequently fine-tuned on the [Databricks Dolly](https://github.com/databrickslabs/dolly/tree/master/data) instruction dataset.
The model was pre-trained for 200B tokens by sampling from the subsets of the RedPajama dataset in the same proportions as were used by the [Llama series of models](https://arxiv.org/abs/2302.13971).
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
This model is an instruction fine-tuned version of [mpt-1b-redpajama-200b](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b). In other words, the pre-trained version of this model is [mpt-1b-redpajama-200b](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b).
## Model Date
April 20, 2023
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom model architecture `MosaicGPT` that is not yet part of the `transformers` package.
`MosaicGPT` includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALIBI](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b-dolly', trust_remote_code=True)
```
To use the optimized triton implementation of FlashAttention, you can load with `attn_impl='triton'` and move the model to `bfloat16` like so:
```python
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b-dolly', trust_remote_code=True, attn_impl='triton')
model.to(device='cuda:0', dtype=torch.bfloat16)
```
## Model Description
This model uses the MosaicML LLM codebase, which can be found in the [MosaicML Examples Repository](https://github.com/mosaicml/examples/tree/v0.0.4/examples/llm).
The architecture is a modification of a standard decoder-only transformer.
The transformer has 24 layers, 16 attention heads, and width 2048.
The model has been modified from a standard transformer in the following ways:
* It uses ALiBi and does not use positional embeddings.
* It uses QK LayerNorm.
* It does not use biases.
## Training Data
### Pre-Training
The model was pre-trained for 200B tokens (batch size 2200, sequence length 2048). It was trained on the following data mix:
* 67% RedPajama Common Crawl
* 15% [C4](https://huggingface.co/datasets/c4)
* 4.5% RedPajama GitHub
* 4.5% RedPajama Wikipedia
* 4.5% RedPajama Books
* 2.5% RedPajama Arxiv
* 2% RedPajama StackExchange
This is the same mix of data as was used in the Llama series of models](https://arxiv.org/abs/2302.13971).
Each sample was chosen from one of the datasets, with the dataset selected with the probability specified above.
The examples were shuffled within each dataset.
Each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Fine-Tuning
We fine tuned this model on the [databricks-dolly-15k dataset](https://github.com/databrickslabs/dolly/tree/master/data) released by Databricks, following the same hyperparameters found in their [train_dolly.py](https://github.com/databrickslabs/dolly/blob/master/train_dolly.py) script.
## Training Configuration
This model was pre-trained on 440 A100-40GBs for about half a day using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was pre-trained with sharded data parallelism using FSDP.
## Acknowledgements
This model builds on the work of [Together](https://www.together.xyz), which created the RedPajama dataset with the goal of mimicking the training data used to create the Llama series of models.
We gratefully acknowledge the hard work of the team that put together this dataset, and we hope this model serves as a useful companion to that work.
This model also builds on the work of [Databricks](https://www.databricks.com/), which created the Dolly instruction fine-tuning dataset.
We also gratefully acknowledge the work of the researchers who created the Llama series of models, which was the impetus for our efforts and those who worked on the RedPajama project. |
tonyassi/celebrity-classifier | tonyassi | "2024-01-29T17:05:19Z" | 882 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-06T19:16:14Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: celebrity-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Celebrity Classifier
## Model description
This model classifies a face to a celebrity. It is trained on [tonyassi/celebrity-1000](https://huggingface.co/datasets/tonyassi/celebrity-1000) dataset and fine-tuned on [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
## Dataset description
[tonyassi/celebrity-1000](https://huggingface.co/datasets/tonyassi/celebrity-1000)
Top 1000 celebrities. 18,184 images. 256x256. Square cropped to face.
### How to use
```python
from transformers import pipeline
# Initialize image classification pipeline
pipe = pipeline("image-classification", model="tonyassi/celebrity-classifier")
# Perform classification
result = pipe('image.png')
# Print results
print(result)
```
## Training and evaluation data
It achieves the following results on the evaluation set:
- Loss: 0.9089
- Accuracy: 0.7982
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
cloudyu/Mixtral_13B_Chat | cloudyu | "2024-02-17T11:20:06Z" | 882 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-17T11:07:07Z" | ---
license: mit
---
This is a fine-tuned 13B chat model
code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_13B_Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=800,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
output examples
```
<s> 12+13-24=?
To solve this equation, we need to follow the order of operations (PEMDAS/BODMAS): Parentheses, Exponents, Multiplication and Division (from left to right), Addition and Subtraction (from left to right).
Given expression: 12 + 13 - 24
First, let's add 12 and 13 together:
12 + 13 = 25
Now our expression looks like this: 25 - 24
Next, subtract 24 from 25:
25 - 24 = 1
So, the final answer is 1.</s>
```
```
<s> write a story about yosemite national park in 1000 words.The sun had just begun to rise over the horizon, casting its golden light upon Yosemite National Park's majestic granite cliffs and towering sequoias. As the first rays of sunlight touched the earth, they illuminated the misty waterfalls that cascaded down from high above, creating an ethereal scene straight out of a fairytale.
In this enchanting land, where nature reigned supreme, there lived a young girl named Ava who had come here seeking solace after losing her parents in a tragic accident. She found herself drawn to the beauty and serenity of Yosemite, which seemed to offer a sense of peace she couldnโt find anywhere else.
Ava spent most days exploring the vast expanse of the park, hiking through meadows filled with wildflowers or scaling rock formations that offered breathtaking views of the valley below. At night, she would sit by the campfire, gazing up at the star-studded sky while listening to the soothing sounds of the river flowing nearby. It was during these moments that she felt closest to her parents, as if their spirits were guiding her on this journey towards healing.
One day, while wandering along one of the many trails leading into the heart of the park, Ava stumbled upon something unexpected โ a small wooden box nestled among the roots of an ancient tree. Curious, she picked it up and opened it to discover a handwritten letter inside. The paper was yellowed with age but still legible, bearing the name "John Muir" at the top.
As she read aloud his words, Ava learned that John Muir had been instrumental in preserving Yosemite National Park back when it was threatened by development projects. He believed deeply in protecting our natural resources for future generations, and he poured his passion into writing letters to politicians and influential figures, urging them to take action. His efforts eventually paid off, resulting in the establishment of what is now known as America's first national park.
Feeling inspired by John Muir's dedication to conservation, Ava decided to follow in his footsteps by penning her own plea for environmental protection. With renewed purpose, she returned home and began researching ways to make a difference. After months of hard work, she drafted a proposal detailing various initiatives aimed at reducing carbon emissions, promoting sustainable practices, and raising awareness about climate change.
Her plan caught the attention of several prominent environmental organizations, who agreed to support her cause wholeheartedly. Together, they launched a campaign called "Save Our Earth: One Step at a Time," encouraging people worldwide to adopt eco-friendly habits such as recycling, using public transportation whenever possible, and planting trees.
Over time, the movement gained momentum, attracting supporters from all walks of life. Celebrities lent their voices to raise awareness, businesses pledged to reduce their carbon footprint, and governments around the globe started implementing policies designed to protect the environment. Slowly but surely, progress was being made.
Years passed since Ava's discovery in Yosemite National Park, yet the memory of finding John Muir's letter remained etched in her mind like a cherished treasure. Now married with two children of her own, she continued advocating for environmental justice alongside her husband, who shared her passion for preservation. Their family often visited Yosemite together, passing on stories about John Muir and his legacy to their kids.
On one particular trip, while hiking through a dense forest, Ava noticed something peculiar โ a group of loggers cutting down trees without any regard for the surrounding ecosystem. Angered by this blatant disregard for nature, she approached the men and demanded an explanation. They replied dismissively, claiming they needed the wood for construction purposes.
Refusing to accept defeat, Ava took matters into her own hands. She gathered her family and friends, forming a human chain around the area designated for logging. Determined not to let anyone harm the precious forest, they stood firm against the loggers until authorities arrived on site. Eventually, the situation escalated into a standoff between both parties, drawing media attention from across the country.
During this tense standoff, Ava recounted her experience with John Muir's letter and how it led her to create the "Save Our Earth" initiative. Her words resonated strongly with those present, sparking conversations about sustainability and responsible resource management. Ultimately, the loggers relented under public pressure, agreeing to cease operations within the protected area.
This incident marked a turning point in Ava's crusade for environmental protection. From then onwards, she dedicated herself fully to spreading awareness about the importance of conserving our planet's natural wonders. Through her tireless efforts, more people became aware of the need for sustainable living practices, ultimately contributing to positive changes in government policies and corporate behavior.
Today, Yosemite National Park remains a testament to the power of individual actions combined with collective effort. Its pristine landscapes continue inspiring countless visitors each year, reminding us all that we have a responsibility towards safeguarding our planet for future generations. And amidst these stunning vistas stands Ava, proudly carrying forth John Muir's legacy, ensuring that his dream of preserving nature lives on forever.</s>
``` |
ivrit-ai/whisper-large-v2-tuned | ivrit-ai | "2024-03-26T11:02:30Z" | 882 | 23 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"dataset:ivrit-ai/whisper-training",
"arxiv:2307.08720",
"arxiv:2212.04356",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-24T08:03:20Z" | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
license: apache-2.0
datasets:
- ivrit-ai/whisper-training
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation.
More details about it are available [here](https://huggingface.co/openai/whisper-large-v2).
**whisper-large-v2-tuned** is a version of whisper-large-v2, fine-tuned by [ivrit.ai](https://www.ivrit.ai) to improve Hebrew ASR using crowd-sourced labeling.
## Model details
This model comes as a single checkpoint, whisper-large-v2-tuned.
It is a 1550M parameters multi-lingual ASR solution.
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
SAMPLING_RATE = 16000
has_cuda = torch.cuda.is_available()
model_path = 'ivrit-ai/whisper-large-v2-tuned'
model = WhisperForConditionalGeneration.from_pretrained(model_path)
if has_cuda:
model.to('cuda:0')
processor = WhisperProcessor.from_pretrained(model_path)
# audio_resample based on entry being part of an existing dataset.
# Alternatively, this can be loaded from an audio file.
audio_resample = librosa.resample(entry['audio']['array'], orig_sr=entry['audio']['sampling_rate'], target_sr=SAMPLING_RATE)
input_features = processor(audio_resample, sampling_rate=SAMPLING_RATE, return_tensors="pt").input_features
if has_cuda:
input_features = input_features.to('cuda:0')
predicted_ids = model.generate(input_features, language='he', num_beams=5)
transcript = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(f'Transcript: {transcription[0]}')
```
## Evaluation
You can use the [evaluate_model.py](https://github.com/yairl/ivrit.ai/blob/master/evaluate_model.py) reference on GitHub to evalute the model's quality.
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="ivrit-ai/whisper-large-v2-tuned",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
### BibTeX entry and citation info
**ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development**
```bibtex
@misc{marmor2023ivritai,
title={ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development},
author={Yanir Marmor and Kinneret Misgav and Yair Lifshitz},
year={2023},
eprint={2307.08720},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
**Whisper: Robust Speech Recognition via Large-Scale Weak Supervision**
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF | mradermacher | "2024-06-02T07:26:29Z" | 882 | 3 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-01T14:59:44Z" | ---
base_model: NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Lumimaid-70B-v0.1-OAS-i1-GGUF/resolve/main/Llama-3-Lumimaid-70B-v0.1-OAS.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
codeparrot/codeparrot-small-multi | codeparrot | "2022-07-15T10:56:13Z" | 881 | 7 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"code",
"generation",
"dataset:codeparrot/github-code-clean",
"dataset:openai_humaneval",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-07-03T19:34:10Z" | ---
language:
- code
license: apache-2.0
tags:
- code
- gpt2
- generation
datasets:
- "codeparrot/github-code-clean"
- "openai_humaneval"
metrics:
- "evaluate-metric/code_eval"
---
# CodeParrot-Multi ๐ฆ (small)
CodeParrot-Multi ๐ฆ is a GPT-2 model (110M parameters) trained to generate code in 9 programming languages: "Java", "JavaScript", "PHP", "Python", "C#", "C++", "GO", "Ruby" and "TypeScript".
## Usage
You can load the CodeParrot-Multi model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-multi")
model = AutoModelWithLMHead.from_pretrained("codeparrot/codeparrot-small-multi")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="codeparrot/codeparrot-small-multi")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the small [Github code small](https://huggingface.co/datasets/loubnabnl/github-small-near-dedup) after near deduplication, a subset of [Github code dataset](https://huggingface.co/datasets/codeparrot/github-code-clean) with the following settings:
|Config|Value|
|-------|-----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 300'000|
|Gradient accumulation| 2|
|Gradient checkpointing| False|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 58 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | Value |
|-------|-----|
|pass@1 | --% |
|pass@10 | --% |
|pass@100 | --% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests.
## Resources
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
|
fatgong/5FCSJqhWc3fhSGNW2zjehm4GkbMjJPtxQGoWsP8ipJtG97dR_vgg | fatgong | "2024-03-28T13:55:09Z" | 881 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:17:06Z" | Entry not found |
mradermacher/CodeLlama3-8B-Python-GGUF | mradermacher | "2024-05-05T15:13:43Z" | 881 | 4 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:ajibawa-2023/Python-Code-23k-ShareGPT",
"base_model:Markhit/CodeLlama3-8B-Python",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-04-23T16:05:12Z" | ---
base_model: Markhit/CodeLlama3-8B-Python
datasets:
- ajibawa-2023/Python-Code-23k-ShareGPT
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Markhit/CodeLlama3-8B-Python
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CodeLlama3-8B-Python-GGUF/resolve/main/CodeLlama3-8B-Python.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
google/paligemma-3b-ft-ocrvqa-896 | google | "2024-06-27T14:10:32Z" | 881 | 6 | transformers | [
"transformers",
"safetensors",
"paligemma",
"pretraining",
"image-text-to-text",
"arxiv:2310.09199",
"arxiv:2303.15343",
"arxiv:2403.08295",
"arxiv:1706.03762",
"arxiv:2010.11929",
"arxiv:2209.06794",
"arxiv:2209.04372",
"arxiv:2103.01913",
"arxiv:2401.06209",
"arxiv:2305.10355",
"arxiv:2205.12522",
"arxiv:2110.11624",
"arxiv:2108.03353",
"arxiv:2010.04295",
"arxiv:2203.10244",
"arxiv:1810.12440",
"arxiv:1905.13648",
"arxiv:1608.00272",
"arxiv:1908.04913",
"license:gemma",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | image-text-to-text | "2024-05-12T22:38:24Z" | ---
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access PaliGemma on Hugging Face
extra_gated_prompt: To access PaliGemma on Hugging Face, youโre required to review
and agree to Googleโs usage license. To do this, please ensure youโre logged-in
to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# PaliGemma model card
**Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma)
Transformers PaliGemma 3B weights, fine-tuned with 896*896 input images on the <a href="https://ocr-vqa.github.io/">OCR-VQA</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/ocrvqa.py">big_vision</a>.
**Resources and technical documentation:**
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma)
* [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363)
**Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-ocrvqa-896)
**Authors:** Google
## Model information
### Model summary
#### Description
PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by
[PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as
the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma
language model](https://arxiv.org/abs/2403.08295). It takes both image and text
as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation.
#### Model architecture
PaliGemma is the composition of a [Transformer
decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image
encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion
params. The text decoder is initialized from
[Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is
initialized from
[SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb).
PaliGemma is trained following the PaLI-3 recipes.
#### Inputs and outputs
* **Input:** Image and text string, such as a prompt to caption the image, or
a question.
* **Output:** Generated text in response to the input, such as a caption of
the image, an answer to a question, a list of object bounding box
coordinates, or segmentation codewords.
### Model data
#### Pre-train datasets
PaliGemma is pre-trained on the following mixture of datasets:
* **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is
a web-scale multilingual image-text dataset built from the public web. A
wide range of WebLI splits are used to acquire versatile model capabilities,
such as visual semantic understanding, object localization,
visually-situated text understanding, multilinguality, etc.
* **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et
al., 2018](https://aclanthology.org/P18-1238/)). We used the [Google Cloud
Translation API](https://cloud.google.com/translate) to translate into 34
additional languages.
* **VQยฒA-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al.,
2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the
same additional 34 languages as CC3M-35L, using the [Google Cloud
Translation API](https://cloud.google.com/translate).
* **OpenImages:** Detection and object-aware questions and answers
([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by
handcrafted rules on the [OpenImages dataset].
* **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al.,
2021](https://arxiv.org/abs/2103.01913)).
[OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html
#### Data responsibility filtering
The following filters are applied to WebLI, with the goal of training PaliGemma
on clean data:
* **Pornographic image filtering:** This filter removes images deemed to be of
pornographic nature.
* **Text safety filtering:** We identify and filter out images that are paired
with unsafe text. Unsafe text is any text deemed to contain or be about
CSAI, pornography, vulgarities, or otherwise offensive.
* **Text toxicity filtering:** We further use the [Perspective
API](https://perspectiveapi.com/) to identify and filter out images that are
paired with text deemed insulting, obscene, hateful or otherwise toxic.
* **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP)
API](https://cloud.google.com/security/products/dlp) to protect the privacy
of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed.
* **Additional methods:** Filtering based on content quality and safety in
line with our policies and practices.
[other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759
## How to Use
PaliGemma is a single-turn vision language model not meant for conversational use,
and it works best when fine-tuning to a specific use case.
You can configure which task the model will solve by conditioning it with task prefixes,
such as โdetectโ or โsegmentโ. The pretrained models were trained in this fashion to imbue
them with a rich set of capabilities (question answering, captioning, segmentation, etc.).
However, they are not designed to be used directly, but to be transferred (by fine-tuning)
to specific tasks using a similar prompt structure. For interactive testing, you can use
the "mix" family of models, which have been fine-tuned on a mixture of tasks.
Please, refer to the [usage and limitations section](#usage-and-limitations) for intended
use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for
additional details and examples.
## Use in Transformers
The following snippets use model `google/paligemma-3b-mix-224` for reference purposes.
The model in this repo you are now browsing may have been trained for other tasks, please
make sure you use appropriate inputs for the task at hand.
### Running the default precision (`float32`) on CPU
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Output: `Un auto azul estacionado frente a un edificio.`
### Running other precisions on CUDA
For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`,
so you can use them to reduce the download size and avoid casting on your local computer.
This is how you'd run `bfloat16` on an nvidia CUDA card.
```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=dtype,
device_map=device,
revision="bfloat16",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Loading in 4-bit / 8-bit
You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision:
```
pip install bitsandbytes accelerate
```
```
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/paligemma-3b-mix-224"
device = "cuda:0"
dtype = torch.bfloat16
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = PaliGemmaForConditionalGeneration.from_pretrained(
model_id, quantization_config=quantization_config
).eval()
processor = AutoProcessor.from_pretrained(model_id)
# Instruct the model to create a caption in Spanish
prompt = "caption es"
model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
input_len = model_inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
## Implementation information
### Hardware
PaliGemma was trained using the latest generation of Tensor Processing Unit
(TPU) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax),
[Flax](https://github.com/google/flax),
[TFDS](https://github.com/tensorflow/datasets) and
[`big_vision`](https://github.com/google-research/big_vision).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
TFDS is used to access datasets and Flax is used for model architecture. The
PaliGemma fine-tune code and inference code are released in the `big_vision`
GitHub repository.
## Evaluation information
### Benchmark results
In order to verify the transferability of PaliGemma to a wide variety of
academic tasks, we fine-tune the pretrained models on each task. Additionally we
train the mix model with a mixture of the transfer tasks. We report results on
different resolutions to provide an impression of which tasks benefit from
increased resolution. Importantly, none of these tasks or datasets are part of
the pretraining data mixture, and their images are explicitly removed from the
web-scale pre-training data.
#### Mix model (fine-tune on mixture of transfer tasks)
<table>
<tbody><tr>
<th>Benchmark</th>
<th>Metric (split)</th>
<th>mix-224</th>
<th>mix-448</th>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td>
<td>Paired Accuracy</td>
<td>46.00</td>
<td>45.33</td>
</tr>
<tr>
<td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td>
<td>Accuracy<br>(random/popular/adversarial)</td>
<td>
88.00<br>
86.63<br>
85.67
</td>
<td>
89.37<br>
88.40<br>
87.47
</td>
</tr>
<tr>
<td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td>
<td>Accuracy (test)</td>
<td>65.20</td>
<td>65.47</td>
</tr>
</tbody></table>
#### Single task (fine-tune on single task)
<table>
<tbody><tr>
<th>Benchmark<br>(train split)</th>
<th>Metric<br>(split)</th>
<th>pt-224</th>
<th>pt-448</th>
<th>pt-896</th>
</tr>
<tr>
<th>Captioning</th>
</tr>
<tr>
<td>
<a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval)
</td>
<td>CIDEr (val)</td>
<td>141.92</td>
<td>144.60</td>
</tr>
<tr>
<td>
<a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer)
</td>
<td>CIDEr (val)</td>
<td>121.72</td>
<td>123.58</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
139.2<br>
115.8<br>
116.4
</td>
<td>
141.2<br>
118.0<br>
118.6
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer)
</td>
<td>CIDEr dev<br>(en/avg-34/avg)</td>
<td>
78.1<br>
41.3<br>
42.4
</td>
<td>
80.0<br>
41.9<br>
42.9
</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train)
</td>
<td>CIDEr (val)</td>
<td>127.48</td>
<td>153.94</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val)
</td>
<td>CIDEr/BLEU-4<br>(test)</td>
<td>
162.25<br>
0.192<br>
</td>
<td>
181.49<br>
0.211<br>
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>117.57</td>
<td>119.59</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev)
</td>
<td>CIDEr (test)</td>
<td>136.07</td>
<td>148.36</td>
</tr>
<tr>
<th>Question answering</th>
</tr>
<tr>
<td>
<a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>83.19</td>
<td>85.64</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer)
</td>
<td>Paired Accuracy</td>
<td>47.33</td>
<td>45.33</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer)
</td>
<td>Accuracy<br>(random/popular/<br>adversarial)</td>
<td>
87.80<br>
85.87<br>
84.27
</td>
<td>
88.23<br>
86.77<br>
85.90
</td>
</tr>
<tr>
<td>
<a href="https://okvqa.allenai.org/">OKVQA</a><br>(train)
</td>
<td>Accuracy (val)</td>
<td>63.54</td>
<td>63.15</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>76.37</td>
<td>76.90</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val)
</td>
<td>Accuracy<br>(Test server)</td>
<td>61.85</td>
<td>63.22</td>
</tr>
<tr>
<td>
<a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced)
</td>
<td>Accuracy<br>(testdev balanced)</td>
<td>65.61</td>
<td>67.03</td>
</tr>
<tr>
<td>
<a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer)
</td>
<td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td>
<td>58.37</td>
<td>59.07</td>
</tr>
<tr>
<td>
<a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev)
</td>
<td>Accuracy (test)</td>
<td>90.02</td>
<td>88.93</td>
</tr>
<tr>
<td>
<a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer)
</td>
<td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td>
<td>80.57</td>
<td>76.78</td>
</tr>
<tr>
<td>
<a href="https://allenai.org/data/diagrams">AI2D</a><br>(train)
</td>
<td>Accuracy (test)</td>
<td>72.12</td>
<td>73.28</td>
</tr>
<tr>
<td>
<a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>95.39</td>
<td>95.93</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test)</td>
<td>92.65</td>
<td>93.11</td>
</tr>
<tr>
<td>
<a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val)
</td>
<td>Mean Accuracy<br>(test/test2)</td>
<td>
92.61<br>
90.58
</td>
<td>
92.79<br>
90.54
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val)
</td>
<td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td>
<td>57.08</td>
<td>71.36</td>
</tr>
<tr>
<td>
<a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>
73.7
</td>
<td>
75.52
</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train)
</td>
<td>Accuracy<br>(test_simple/<br>test_complex)</td>
<td>
81.72<br>
69.56
</td>
<td>
84.86<br>
72.27
</td>
</tr>
<tr>
<td>
<a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val)
</td>
<td>Accuracy (test)</td>
<td>72.32</td>
<td>74.61</td>
<td>74.93</td>
</tr>
<tr>
<td>
<a href="https://textvqa.org/">TextVQA</a><br>(train+val)
</td>
<td>Accuracy<br>(Test server - std)</td>
<td>55.47</td>
<td>73.15</td>
<td>76.48</td>
</tr>
<tr>
<td>
<a href="https://www.docvqa.org/">DocVQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>43.74</td>
<td>78.02</td>
<td>84.77</td>
</tr>
<tr>
<td>
<a href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>28.46</td>
<td>40.47</td>
<td>47.75</td>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val)
</td>
<td>ANLS (Test server)</td>
<td>63.29</td>
<td>81.82</td>
<td>84.40</td>
</tr>
<tr>
<th>Segmentation</th>
</tr>
<tr>
<td>
<a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images)
</td>
<td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td>
<td>
73.40<br>
68.32<br>
67.65
</td>
<td>
75.57<br>
69.76<br>
70.17
</td>
<td>
76.94<br>
72.18<br>
72.22
</td>
</tr>
<tr>
<th>Video tasks (Caption/QA)</th>
</tr>
<tr>
<td>MSR-VTT (Captioning)</td>
<td>CIDEr (test)</td>
<td>70.54</td>
</tr>
<tr>
<td>MSR-VTT (QA)</td>
<td>Accuracy (test)</td>
<td>50.09</td>
</tr>
<tr>
<td>ActivityNet (Captioning)</td>
<td>CIDEr (test)</td>
<td>34.62</td>
</tr>
<tr>
<td>ActivityNet (QA)</td>
<td>Accuracy (test)</td>
<td>50.78</td>
</tr>
<tr>
<td>VATEX (Captioning)</td>
<td>CIDEr (test)</td>
<td>79.73</td>
</tr>
<tr>
<td>MSVD (QA)</td>
<td>Accuracy (test)</td>
<td>60.22</td>
</tr>
</tbody></table>
## Ethics and safety
### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Human evaluation on prompts covering child safety, content safety and
representational harms. See the [Gemma model
card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for
more details on evaluation approach, but with image captioning and visual
question answering setups.
* Image-to-Text benchmark evaluation: Benchmark against relevant academic
datasets such as FairFace Dataset ([Karkkainen et al.,
2021](https://arxiv.org/abs/1908.04913)).
### Evaluation results
* The human evaluation results of ethics and safety evaluations are within
acceptable thresholds for meeting [internal
policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11)
for categories such as child safety, content safety and representational
harms.
* On top of robust internal evaluations, we also use the Perspective API
(threshold of 0.8) to measure toxicity, profanity, and other potential
issues in the generated captions for images sourced from the FairFace
dataset. We report the maximum and median values observed across subgroups
for each of the perceived gender, ethnicity, and age attributes.
<table>
<tbody><tr>
</tr></tbody><tbody><tr><th>Metric</th>
<th>Perceived<br>gender</th>
<th></th>
<th>Ethnicity</th>
<th></th>
<th>Age group</th>
<th></th>
</tr>
<tr>
<th></th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
<th>Maximum</th>
<th>Median</th>
</tr>
<tr>
<td>Toxicity</td>
<td>0.04%</td>
<td>0.03%</td>
<td>0.08%</td>
<td>0.00%</td>
<td>0.09%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Identity Attack</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Insult</td>
<td>0.06%</td>
<td>0.04%</td>
<td>0.09%</td>
<td>0.07%</td>
<td>0.16%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Threat</td>
<td>0.06%</td>
<td>0.05%</td>
<td>0.14%</td>
<td>0.05%</td>
<td>0.17%</td>
<td>0.00%</td>
</tr>
<tr>
<td>Profanity</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
<td>0.00%</td>
</tr>
</tbody></table>
## Usage and limitations
### Intended usage
Open Vision Language Models (VLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
Fine-tune on specific vision-language task:
* The pre-trained models can be fine-tuned on a wide range of vision-language
tasks such as: image captioning, short video caption, visual question
answering, text reading, object detection and object segmentation.
* The pre-trained models can be fine-tuned for specific domains such as remote
sensing question answering, visual questions from people who are blind,
science question answering, describe UI element functionalities.
* The pre-trained models can be fine-tuned for tasks with non-textual outputs
such as bounding boxes or segmentation masks.
Vision-language research:
* The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM
techniques, develop algorithms, and contribute to the advancement of the
field.
### Ethical considerations and risks
The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:
* Bias and Fairness
* VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card.
* Misinformation and Misuse
* VLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.
Risks identified and mitigations:
* **Perpetuation of biases:** It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* **Generation of harmful content:** Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the [Gemma
Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.
### Limitations
* Most limitations inherited from the underlying Gemma model still apply:
* VLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* Natural language is inherently complex. VLMs might struggle to grasp
subtle nuances, sarcasm, or figurative language.
* VLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* VLMs rely on statistical patterns in language and images. They might
lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained
model for transfer to specialized tasks. Hence, its "out of the box" or
"zero-shot" performance might lag behind models designed specifically for
that.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of
image and text input.
|
ALM/hubert-base-audioset | ALM | "2024-06-22T09:21:26Z" | 880 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"feature-extraction",
"music",
"audio",
"audio-classification",
"arxiv:2405.00934",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-08-29T13:36:58Z" | ---
license: cc-by-nc-sa-4.0
pipeline_tag: audio-classification
tags:
- music
- audio
---
# Model Card: Pre-trained Audio Representation Models on AudioSet
## Overview
This model card presents information about pre-trained audio representation models released by ALM. These models are pre-trained on the full AudioSet dataset and are intended for general-purpose Audio Representation Learning (ARL) tasks.
## Models
### 1. [ALM/hubert-base-audioset](https://huggingface.co/ALM/hubert-base-audioset)
- **Architecture**: HuBERT (Hubert-Base) transformer-based model
- **Description**: This model is based on the HuBERT architecture, pre-trained on the full AudioSet dataset.
### 2. [ALM/hubert-large-audioset](https://huggingface.co/ALM/hubert-large-audioset)
- **Architecture**: HuBERT (Hubert-Large) transformer-based model
- **Description**: Similar to the hubert-base-audioset model, this variant is larger in size, providing increased capacity for capturing audio representations from the full AudioSet dataset.
### 3. [ALM/wav2vec2-base-audioset](https://huggingface.co/ALM/wav2vec2-base-audioset)
- **Architecture**: Wav2Vec 2.0 (Wav2Vec2-Base) transformer-based model
- **Description**: This model is based on the Wav2Vec 2.0 architecture, trained on the full AudioSet dataset using SSL with CPC. It offers a different approach to audio representation learning compared to the HuBERT models.
### 4. [ALM/wav2vec2-large-audioset](https://huggingface.co/ALM/wav2vec2-large-audioset)
- **Architecture**: Wav2Vec 2.0 (Wav2Vec2-Large) transformer-based model
- **Description**: Similar to the wav2vec2-base-audioset model, this variant is larger in size, providing enhanced capacity for learning audio representations from the full AudioSet dataset.
## Intended Use
These pre-trained models are intended for a wide range of ARL tasks, including but not limited to speech recognition, music classification, and acoustic event detection. They serve as powerful tools for feature extraction and can be fine-tuned on task-specific datasets for downstream applications.
It's important to note that while these models offer versatility across various audio domains, their performance in speech-related tasks may be relatively lower compared to specialized models such as the original Wav2Vec and HuBERT models.
This is due to the diverse nature of the AudioSet dataset used for pre-training, which includes a wide range of audio sources beyond speech.
## Limitations and Considerations
- The models are pre-trained on the full AudioSet dataset, which may not cover all possible audio domains comprehensively.
- Fine-tuning on domain-specific data may be necessary to achieve optimal performance for certain tasks.
- Computational resources may be required for deploying and fine-tuning these models, especially the larger variants.
## Citation
If you use these pre-trained models in your work, please cite the following
```bib
@article{ARCH,
title={Benchmarking Representations for Speech, Music, and Acoustic Events},
author={La Quatra, Moreno and Koudounas, Alkis and Vaiani, Lorenzo and Baralis, Elena and Garza, Paolo and Cagliero, Luca and Siniscalchi, Sabato Marco},
year={2024},
booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
}
```
[arXiv version: arxiv.org/abs/2405.00934](arxiv.org/abs/2405.00934) |
TheBloke/alfred-40B-1023-GGUF | TheBloke | "2023-11-17T15:47:52Z" | 880 | 4 | transformers | [
"transformers",
"gguf",
"falcon",
"falcon-40b",
"long-context",
"NTK-YaRN",
"en",
"fr",
"de",
"es",
"it",
"dataset:OpenAssistant/oasst1",
"dataset:ehartford/dolphin",
"dataset:tau/sled",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2306.15595",
"arxiv:2309.00071",
"arxiv:2307.03172",
"base_model:lightonai/alfred-40b-1023",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | "2023-11-17T15:26:43Z" | ---
base_model: lightonai/alfred-40b-1023
datasets:
- OpenAssistant/oasst1
- ehartford/dolphin
- tau/sled
- tiiuae/falcon-refinedweb
inference: false
language:
- en
- fr
- de
- es
- it
license: apache-2.0
model_creator: LightOn AI
model_name: Alfred 40B 1023
model_type: falcon
prompt_template: '<start_system>You are Alfred, a helpful assistant trained by LightOn.
Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant>
'
quantized_by: TheBloke
tags:
- falcon-40b
- long-context
- falcon
- NTK-YaRN
thumbnail: images/alfred-40b-1023.png
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Alfred 40B 1023 - GGUF
- Model creator: [LightOn AI](https://huggingface.co/lightonai)
- Original model: [Alfred 40B 1023](https://huggingface.co/lightonai/alfred-40b-1023)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LightOn AI's Alfred 40B 1023](https://huggingface.co/lightonai/alfred-40b-1023).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/alfred-40B-1023-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/alfred-40B-1023-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF)
* [LightOn AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lightonai/alfred-40b-1023)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alfred
```
<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alfred-40b-1023.Q2_K.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q2_K.gguf) | Q2_K | 2 | 17.40 GB| 19.90 GB | smallest, significant quality loss - not recommended for most purposes |
| [alfred-40b-1023.Q3_K_S.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q3_K_S.gguf) | Q3_K_S | 3 | 18.32 GB| 20.82 GB | very small, high quality loss |
| [alfred-40b-1023.Q3_K_M.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q3_K_M.gguf) | Q3_K_M | 3 | 20.06 GB| 22.56 GB | very small, high quality loss |
| [alfred-40b-1023.Q3_K_L.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q3_K_L.gguf) | Q3_K_L | 3 | 21.60 GB| 24.10 GB | small, substantial quality loss |
| [alfred-40b-1023.Q4_0.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q4_0.gguf) | Q4_0 | 4 | 23.81 GB| 26.31 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alfred-40b-1023.Q4_K_S.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q4_K_S.gguf) | Q4_K_S | 4 | 23.81 GB| 26.31 GB | small, greater quality loss |
| [alfred-40b-1023.Q4_K_M.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q4_K_M.gguf) | Q4_K_M | 4 | 25.45 GB| 27.95 GB | medium, balanced quality - recommended |
| [alfred-40b-1023.Q5_0.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q5_0.gguf) | Q5_0 | 5 | 28.97 GB| 31.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alfred-40b-1023.Q5_K_S.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q5_K_S.gguf) | Q5_K_S | 5 | 28.97 GB| 31.47 GB | large, low quality loss - recommended |
| [alfred-40b-1023.Q5_K_M.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q5_K_M.gguf) | Q5_K_M | 5 | 30.64 GB| 33.14 GB | large, very low quality loss - recommended |
| [alfred-40b-1023.Q6_K.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q6_K.gguf) | Q6_K | 6 | 34.46 GB| 36.96 GB | very large, extremely low quality loss |
| [alfred-40b-1023.Q8_0.gguf](https://huggingface.co/TheBloke/alfred-40B-1023-GGUF/blob/main/alfred-40b-1023.Q8_0.gguf) | Q8_0 | 8 | 44.46 GB| 46.96 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/alfred-40B-1023-GGUF and below it, a specific filename to download, such as: alfred-40b-1023.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/alfred-40B-1023-GGUF alfred-40b-1023.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/alfred-40B-1023-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/alfred-40B-1023-GGUF alfred-40b-1023.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m alfred-40b-1023.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{prompt}<end_message><start_assistant>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/alfred-40B-1023-GGUF", model_file="alfred-40b-1023.Q4_K_M.gguf", model_type="falcon", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LightOn AI's Alfred 40B 1023
# Model Card for Alfred-40B-1023

`Alfred-40B-1023` is a finetuned version of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), with an **extended context length of 8192 tokens**.
Finetuning was performed in October 2023. `Alfred-40B-1023` is made available under the Apache 2.0 License.
## Model Details
### Model Description
- **Developed by:** [LightOn](https://www.lighton.ai/)
* [Oskar Hallstrรถm](https://huggingface.co/ohallstrom) (project lead, training & modeling, internal long context data, evaluation)
* [Amรฉlie Chatelain](https://huggingface.co/ameliechatelain) (internal data & long context data, data generation)
* [Clรฉment Thiriet](https://huggingface.co/cthiriet) (data infrastructure, data generation, evaluation)
* [Julien Sรฉailles](https://huggingface.co/Jseailleslighton) (data generation)
* [Adrien Cavaillรจs](https://huggingface.co/adcavail) (data generation)
* [Axel Marmet](https://huggingface.co/WeightsnWizardry)* (training 2K baseline)
`*` work done while at LightOn
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **License:** Apache 2.0 license.
- **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b)
- **Training date:** October 2023 (`1023`).
## Uses
### Direct Use
`Alfred-40B-1023` can be used as a chat model or as an instruct model.
For both instruct and chat mode, the model has been trained with chat tokens `<start_system>`, `<start_user>`, `<start_assistant>`, and `<end_message>`. These can be integrated into the prompt in the follwoing way:
```
<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{user query}<end_message><start_assistant>
```
The stop word `<end_message>` should be used.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
`Alfred-40B-1023` is a finetune of Falcon-40B. As such, it is trained mostly on English, German, Spanish, French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of `Alfred-40B-1023` to implement appropriate guardrails and precautions in any production use.
## How to Get Started with the Model
Use the code below to get started with the model.
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "lightonai/alfred-40b-1023"
tokenizer = AutoTokenizer.from_pretrained("lightonai/alfred-0923-tokenizer")
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>Write me an email to my boss, explaining how the company could benefit by using LightOns platform for Large Language Models, Paradigm.<end_message><start_assistant>",
max_length=1000,
do_sample=True,
top_k=3,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Alfred-40B-1023 was trained on a mixture of publicly available and in-house curated datasets. The training data is composed of 50 % short context tasks, 45 % long context tasks and 5 % RefinedWeb.
| **Short context sources** |
|--------------------|
| [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| [dolphin](https://huggingface.co/ehartford/dolphin) |
| [openai-critiques](https://openaipublic.blob.core.windows.net/critiques/README.md) |
| internal |
`internal` is a collection of synthetic and human-generated datasets created by Ligthon, tailored towards the use cases of our clients.
| **Long context sources** |
|--------------------|
| [sled](https://huggingface.co/datasets/tau/sled) |
| internal-long-context |
`internal-long-context` is a collection of synthetic datasets generated by LightOn, tailored towards the use cases of our clients.
During training, we apply regular language modeling loss for a partition of the prompts in the long context data.
| **Pretraining objective source** |
|--------------------|
| [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) |
### Training Procedure
`Alfred-40B-1023` was trained on 128 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=2, DP=8) combined with ZeRO. Alfred has been trained through supervised finetuning on 100 megatokens, with a learning rate decayed with a cosine schedule.
#### Preprocessing
All datasets have been filtered, up or downsampled, and adapted to our chat token format.
#### Context length extension
We extend the context length to 8K with a custom method that we name NTK-YaRN. As guessable from its name, our extension method draws inspiration from NTK-aware interpolation and YaRN.
During our context length extension efforts, we experimented with various methods suitable for RoPE embeddings. These include vanilla [positional interpolation](https://arxiv.org/abs/2306.15595), [NTK-aware interpolation](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/), [NTK-by-parts](https://github.com/jquesnelle/scaled-rope/pull/1), and lastly [YaRN](https://arxiv.org/abs/2309.00071).
YaRN looked very promising when applied at test-time, however finetuning with YaRN was not successful in our experiments. When extending the context length at training-time, NTK-aware interpolation was the most successful out of the already existing methods. Some of our results from trying different long context extension methods are shared in the Evaluation section below. We acknowledge that the same parameter values as proposed in the YaRN-paper have been used in our YaRN experiments, and that these potentially could have other optimal values for our particular setup.
##### NTK-YaRN
Similarly to NTK-aware interpolation (`NTK`), NTK-YaRN involves increasing the base of the RoPE embeddings. In the original implementation of NTK-aware interpolation the new base `b'` is adapted according to the following formula:
$$ b' = b \times s^{\frac{|D|}{|D|-2}} $$
where `b` is the original base, `s` the scaling factor of the context length, and `|D|` the model's head dimension.
However, we find (similar to other actors) that increasing the base slightly more is even better. The value of `b'` could probably be optimized even further, but for these experiments we have settled with the following value:
$$ b' = b \times (s+1)^{\frac{|D|}{|D|-2}} $$
In the following parts of this model card, context length extension with this extended scaling of the base is referred to as `NTK-Margin`. For `NTK-YaRN`, the extended scaling of the base is combined with the modification of the computation of the attention weights made in YaRN, where the query and key matrices are scaled by the factor `m`.
$$ m = 1 + 0.1 \times \log(s) $$
Scaling the query and key matrices this way substantially reduces the initial grad norm when applying a context length extension method in our training runs.
To cite NTK-YaRN, please refer to the model bibtex in the bottom of this model card.
## Evaluation
### Context length extension strategies
#### Training losses
After experimenting on a 7B scale, we finally run a selected partition of the extension methods on a 40B scale. In the figure below, we display the resulting training losses when training a 40B model with the different extension methods, ceteris paribus.

Initially, YaRN has the lowest training loss, which can be seen as a reflection of the fact that YaRN was the most successful extension method at test time. However all the other methods surpasse YaRN in terms of training loss already after a handful of megatokens. Comparing NTK-Margin vs NTK-YaRN, we can note that the scaling of Q and K matrices makes the training loss lower in the beginning, however NTK-YaRN's advantage over NTK-Margin decreases as the training goes on. Comparing NTK-Margin with NTK in turn, it seems like the larger value of the base in NTK-Margin gives an initial boost in training loss, however this advantage decreases as training goes on.
#### Performance on Long Context Benchmarks
We evaluate the context length extension methods on an own benchmark, consisting of four tasks.
* [Key-value retrieval UUID](https://arxiv.org/pdf/2307.03172.pdf)
* [Coarse-grained Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Fine-grained Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Multi document retrieval data](https://nlp.stanford.edu/data/nfliu/lost-in-the-middle/nq-open-contriever-msmarco-retrieved-documents.jsonl.gz)
For each task, we have created 3 subtasks - one for each of the three context lengths 2K, 4K and 8K. In total, we thus have 12 subtasks.
In order to get an aggregated score that values each subtask equally, we normalize the scores for each subtask and then calculate the mean of the normalized scores for each extension method.

On these benchmarks, YaRN clearly lags behind. NTK-YaRN is the winning method, however NTK-Margin is so close that more extensive research is needed to verify that NTK-YaRN really is superior to NTK-Margin, especially when trained for longer.
### Comparison to 2K baseline
In order to track any potential degradation on 2K context tasks due to the context length extension, we compare our 8K model against a 2K model trained in a similar setup for 100 megatokens. When training the 2K baseline, we don't include any long context data.
We conduct the comparison by evaluating the models on a selection of tasks from EleutherAI harness, as well as ranking model outputs internally.

Notably, our 8K model not only performs on par with our 2K model on most of our EleutherAI harness tasks, in fact it outperforms the 2K model on a majority of the tasks. Reading comprehension is the only subcategory for which our 8K model is outperformed by the 2K model.
We recognize that there is a discrepancy between performance on classical NLP benchmarks and how humans perceive the model quality. When model outputs (limited to 2K context lengths) are ranked by LightOn employees internally, the 2K and 8K have strikingly similar performance. However, a few rare failure modes have been noted for the 8K version, which are not seen when using the 2K model. These failure modes are likely to be fixable with better composition of the long context data.
## Compute Infrastructure
### Hardware
Alfred-40B-1023 was trained on AWS SageMaker, on 128 A100 40GB GPUs in P4d instances.
### Software
Alfred-40B-1023 was trained with a custom codebase. Training leverages a 3D parallelism approach combined with ZeRO, as well as high-performance kernels such as FlashAttention.
## Model Card Contact
Please open a Community Discussion for any support request related to using Alfred with HuggingFace transformers.
For any other inquiry: [email protected]
## Citation
If you find the model useful in your work, please use the following bibtex when citing.
```
@article{alfred-40b-1023,
title={Alfred-40B-1023},
author={Hallstrรถm, Oskar and Chatelain, Amรฉlie and Thiriet, Clรฉment and Sรฉailles, Julien and Cavaillรจs, Adrien and Marmet, Axel},
year={2023}
}
```
<!-- original-model-card end -->
|
fatgong/5DF38VhmtVBdkbuFQgMpZtjGEgve4gepMmkYwhUof6pC7Ves_vgg | fatgong | "2024-03-19T00:02:47Z" | 880 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:12:02Z" | Entry not found |
mlx-community/WizardLM-2-8x22B-4bit | mlx-community | "2024-04-16T11:57:15Z" | 880 | 9 | mlx | [
"mlx",
"safetensors",
"mixtral",
"license:apache-2.0",
"region:us"
] | null | "2024-04-15T18:05:31Z" | ---
license: apache-2.0
tags:
- mlx
---
# mlx-community/WizardLM-2-8x22B-4bit
This model was converted to MLX format from [`prince-canuma/WizardLM-2-8x22B`]() using mlx-lm version **0.9.0**.
Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
Refer to the [original model card](https://huggingface.co/prince-canuma/WizardLM-2-8x22B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/WizardLM-2-8x22B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
amara16/distilbert-extractive-qa-large-project | amara16 | "2024-05-10T21:02:36Z" | 880 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-05-10T20:50:13Z" | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/manmaru_mix_pony-v1-sdxl | John6666 | "2024-05-26T23:28:54Z" | 880 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-26T23:22:29Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/466307?modelVersionId=518886).
|
mradermacher/Mahou-1.3-llama3-8B-i1-GGUF | mradermacher | "2024-05-30T04:41:16Z" | 880 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/MahouMix-v1",
"base_model:flammenai/Mahou-1.3-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-29T09:11:46Z" | ---
base_model: flammenai/Mahou-1.3-llama3-8B
datasets:
- flammenai/MahouMix-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/flammenai/Mahou-1.3-llama3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-llama3-8B-i1-GGUF/resolve/main/Mahou-1.3-llama3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
tau/splinter-base-qass | tau | "2021-09-03T08:47:00Z" | 879 | 1 | transformers | [
"transformers",
"pytorch",
"splinter",
"question-answering",
"SplinterModel",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- splinter
- SplinterModel
license: apache-2.0
---
# Splinter base model (with pretrained QASS-layer weights)
Splinter-base is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive.
Note: This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-base](https://huggingface.co/tau/splinter-base).
## Model description
Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions).
## Intended uses & limitations
The prime use for this model is few-shot extractive QA.
## Pretraining
The model was pretrained on a v3-8 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details.
### BibTeX entry and citation info
```bibtex
@inproceedings{ram-etal-2021-shot,
title = "Few-Shot Question Answering by Pretraining Span Selection",
author = "Ram, Ori and
Kirstain, Yuval and
Berant, Jonathan and
Globerson, Amir and
Levy, Omer",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.239",
doi = "10.18653/v1/2021.acl-long.239",
pages = "3066--3079",
}
```
|
mmnga/ELYZA-japanese-Llama-2-7b-gguf | mmnga | "2023-11-16T14:27:12Z" | 879 | 5 | null | [
"gguf",
"llama2",
"ja",
"arxiv:2307.09288",
"license:llama2",
"region:us"
] | null | "2023-08-29T06:32:01Z" | ---
license: llama2
language:
- ja
tags:
- llama2
---
# ELYZA-japanese-Llama-2-7b-gguf
[ELYZAใใใๅ
ฌ้ใใฆใใELYZA-japanese-Llama-2-7b](https://huggingface.co/ELYZA/ELYZA-japanese-Llama-2-7b)ใฎggufใใฉใผใใใๅคๆ็ใงใใ
ไปใฎใขใใซใฏใใกใ
้ๅธธ็: llama2ใซๆฅๆฌ่ชใฎใใผใฟใปใใใงๅญฆ็ฟใใใขใใซ
[mmnga/ELYZA-japanese-Llama-2-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf)
Fast็ ๆฅๆฌ่ชใฎ่ชๅฝใ่ฟฝๅ ใใฆใใผใฏใณใณในใใๆธใใใ1.8ๅ้ซ้ๅใใใขใใซ
[mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-gguf)
[mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-fast-instruct-gguf)
Codellama็ GGUF
[mmnga/ELYZA-japanese-CodeLlama-7b-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-gguf)
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-gguf)
Codellama็ GPTQ
[mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k](https://huggingface.co/mmnga/ELYZA-japanese-CodeLlama-7b-instruct-GPTQ-calib-ja-1k)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'ELYZA-japanese-Llama-2-7b-q4_0.gguf' -n 256 -p '[INST] <<SYS>>ใใชใใฏ่ช ๅฎใงๅช็งใชๆฅๆฌไบบใฎใขใทในใฟใณใใงใใ<</SYS>>ใฏใใๆตท่พบใซ่กใฃใฆใขใถใฉใทใจๅ้ใซใชใใๆ็ต็ใซใฏๅฎถใซๅธฐใใจใใใใญใใใฎ็ญ็ทจๅฐ่ชฌใๆธใใฆใใ ใใใ [/INST]'
```
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### ๅผ็จ Citations
```tex
@misc{elyzallama2023,
title={ELYZA-japanese-Llama-2-7b},
url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
year={2023},
}
```
```tex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Artefact2/Mega-Destroyer-8x7B-GGUF | Artefact2 | "2024-03-17T03:33:40Z" | 879 | 5 | null | [
"gguf",
"en",
"license:cc-by-4.0",
"region:us"
] | null | "2024-03-14T11:18:53Z" | ---
language:
- en
license: cc-by-4.0
---
These are GGUF quantized versions of [FoxEngineAi/Mega-Destroyer-8x7B](https://huggingface.co/FoxEngineAi/Mega-Destroyer-8x7B).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
* What quant do I need? See https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
* Want to steal my scripts? See https://gist.github.com/Artefact2/fd2254fc133906ac96b49b6947f0cd4a
* Quant requests? Just open a discussion in the community tabs. |
scribis/Fantastica-7b-Instruct-0.2-Italian-GGUF | scribis | "2024-05-06T13:16:46Z" | 879 | 0 | peft | [
"peft",
"gguf",
"Italian",
"GGUF",
"Mistral",
"finetuning",
"Text Generation",
"text-generation",
"it",
"dataset:scribis/Wikipedia_it_Trame_Romanzi",
"dataset:scribis/Wikipedia-it-Descrizioni-di-Dipinti",
"dataset:scribis/Wikipedia-it-Trame-di-Film",
"dataset:scribis/Corpus-Frasi-da-Opere-Letterarie",
"dataset:scribis/Wikipedia-it-Mitologia-Greca",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-04-12T23:22:34Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- scribis/Wikipedia_it_Trame_Romanzi
- scribis/Wikipedia-it-Descrizioni-di-Dipinti
- scribis/Wikipedia-it-Trame-di-Film
- scribis/Corpus-Frasi-da-Opere-Letterarie
- scribis/Wikipedia-it-Mitologia-Greca
language:
- it
tags:
- Italian
- GGUF
- Mistral
- finetuning
- Text Generation
pipeline_tag: text-generation
library_name: peft
widget:
- text: "Descrivi un dipinto che abbia per soggetto il mito di Filemone e Bauci."
example_title: "Dipinto"
- text: "Scrivi la trama di un romanzo ambientato a Venezia nel XVi secolo."
example_title: "Romanzo"
---
Model Card for Fantastica-7b-Instruct-0.2-Italian-GGUF
# ๐ฎ๐น Fantastica-7b-Instruct-0.2-Italian-GGUF ๐ฎ๐น
Fantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. ๐ฎ๐น
# Fantastica-7b-Instruct-0.2-Italian's peculiar features:
- Mistral-7B-Instruct-v0.2 as base.
- generation of texts in Italian according to the __style__ of 4 great authors of Italian literature. __Alessandro Manzoni__, __Carlo Dossi__, __Carlo Collodi__ and __Luigi Pirandello__.
- generation of novel plots in Italian.
- generation of descriptions of paintings in Italian.
- Trained on: selected entries from Wikipedia, corpus of sentences taken from the works of great Italian writers, and selected paragraphs of particular stylistic value from Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.
- Training took 70 hours with Google Colab A100.
- First version of a series of models dedicated to Italian literature.
# Notes on sources:
- the model makes use of literary works no longer covered by copyright, largely derived from liberliber.it corpus.
- the model uses entries selected from the free Wikipedia encyclopedia.
# PROMPT FORMAT
\<s\>[INST]{instruction}[/INST]
for few-shots prompt use:
\<s\>[INST]{instruction}[/INST]{response}[INST]{instruction2}[/INST]
# COLAB NOTEBOOKS
- only CPU (You have to wait a little longer to get the answers): https://colab.research.google.com/drive/1KGkLJXHZJuy3oOaj2cW16-VKSX21U8Pz?usp=sharing
- __with GPU__: https://colab.research.google.com/drive/1qBZaQu2Vt4i3aEFJQSesgI5pdfqvja7O?usp=sharing
# Examples:
## EXAMPLE 1 - Text generation in Italian in the style of {AUTHOR}
# INSTRUCTION
@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore รจ presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.
# RESPONSE
# temperature:0.6 (Style imitation: low. Content consistency: medium)
Titolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta รจ fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. ร lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sรฉ, e gli offre del vino. Il viaggiatore, per rispetto, non puรฒ rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore รจ molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lรฌ a poco, si mette a parlare...
# temperature: 0.7 (Style imitation: medium; Content consistency: low)
Titolo: Viaggio nel regno dominato da mostri e spietati signori\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindรฒna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassรน in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovรฒ seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, nรฉ aveva ricambiato mai un saluto, nรฉ gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno perรฒ, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c
# TRY SETTING DIFFERENT PARAMETERS!
## EXAMPLE 2 - Text generation of imaginary paintings in Italian (Wikipedia style)
# INSTRUCTION (with the start of the response provided by the user)
\<s\>[INST]@Scribis @Italiano. Descrivi un dipinto.[/INST] ### Titolo: Macbeth di fronte alle streghe. ### Introduzione: Macbeth di fronte alle streghe รจ il titolo dell'opera pittorica di Gian Lorenzo Bernini realizzata nel 1645-1650 e conservato nella Galleria degli Uffizi a Firenze. ### Descrizione. La scena rappresenta il momento in cui Macbeth e Banquo si trovano davanti alle streghe che stanno recitando una formula magica mentre mescolano una pozione.
# RESPONSE
# temperature: 0.9
L'artista immagina l'uomo vestito di nero con la schiena reclinata, con le mani che non riescono a controllare i movimenti della moglie e il bambino: la sua atteggiamento รจ quindi un esempio del carattere instabile dei personaggi. La scena si svolge in una stanza illuminata da luce rossa, che simboleggia le emozioni di Macbeth. La composizione รจ molto dinamica e la figura di Lady Macbeth sembra essere spinta verso il suo marito dal mistero della magia, come se volesse afferrare con le mani lo stile dell'opera per dominarlo completamente. L'ambientazione della scena รจ ispirata alle vicende di Macbeth, in particolare al momento in cui il re si reca nel palazzo dei Banchetti e trova la moglie che sta bevendo una pozione magica. La stanza รจ ricca di oggetti d'arte, come un busto di Giulio Cesare, un ritratto di Tiberio e un dipinto con il martirio di sant'Agata: questi sono stati interpretati da alcuni studiosi come simboli della decadenza del potere dei re scozzesi. La stanza รจ inoltre illuminata dalla luce proveniente dai corpi delle streghe, che indicano un equilibrio fra le emozioni dei protagonisti e il loro carattere irresponsabile: la luce rappresenta infatti i momenti di scoperta della veritร e dell'innocenza. Il dipinto รจ anche ricco di dettagli decorativi, come lo stile del muro con una cornice in oro e un tappeto che simboleggia la forza del potere dei re: il tappeto infatti ha delle forme geometriche che si allungano verso l'esterno della scena. Il dipinto รจ anche ricco di dettagli naturalistici, come le foglie e i fiori presenti nella stanza, la testa del serpente che si trova sul tappeto e il pavimento in marmo bianco: queste sono state interpretate da alcuni studiosi come simboli della morte. Il dipinto รจ anche ricco di dettagli architettonici, come la scala a sinistra e la cisterna alla base dell'arcione centrale del pavimento, che indicano una realtร trasmessa tramite la tecnologia contemporanea, attraverso le rappresentazioni di un ambiente ricco e complesso...
# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:
# Prompt to generate texts in the style of Carlo Collodi
- @Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate sentences in the style of Luigi Pirandello
- @Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.
# Prompt to generate texts in the style of Luigi Pirandello
- @Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Carlo Dossi
- @Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate texts in the style of Alessandro Manzoni
- @Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.
# Prompt to generate Movie plots (Wikipedia style)
- @Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.
- @Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.
- @Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.
- @Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.
# Prompt to generate plots of novels (Wikipedia style)
- @Scribis @Italiano. Raccontami la trama di un romanzo.
- @Scribis @Italiano. Riassumi la storia di un libro.
- @Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.
- @Scribis @Italiano. Scrivi la trama di un romanzo.
# Prompt to generate painting descriptions (Wikipedia style)
- @Scribis @Italiano. Il tuo compito รจ descrivere un dipinto.
- @Scribis @Italiano. Descrivi un dipinto.
- @Scribis @Italiano. Descrivi un celebre dipinto.
- @Scribis @Italiano. Descrivi un famoso dipinto.
# Prompt to generate a Greek mythology topic. (Wikipedia style)
- @Scribis @Italiano. Parlami di un argomento di mitologia greca.
- @Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.
# More information coming soon to help you make the best use of the model
# Democratize AI!
## Limitations
- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
## Model Author:
Fabio Martines <[email protected]> |
Vezora/Mistral-22B-v0.2 | Vezora | "2024-04-15T05:24:38Z" | 879 | 108 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-13T05:53:39Z" | ---
license: apache-2.0
---
<img src="https://huggingface.co/Vezora/Mistral-22B-v0.1/resolve/main/unsloth.png" width="100" height="150" />
### Mistral-22b-v.02 Release Announcement ๐
## This model is not an moe, it is infact a 22B parameter dense model!
**Date**: April 13
**Creator** [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
### Overview
- Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v.02**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22b model. This model is not a single trained expert, rather its a compressed MOE model, turning it into a dense 22b mode. This is the first working MOE to Dense model conversion.
- v0.2 has trained on 8x more data than v0.1!
### Capabilities
- **Math Proficiency**: The model exhibits strong mathematical abilities. Dispite not being trained on math.
- **Better at Coding** The model is significantly better at coding, than V1, it passed some of my simple coding test, such as "Create a simple HTML site with a button that changes the background color to a random color", which V1 failed.
- **More Cohesive** This V2 model is significantly more cohesive, and better at aunderstanding the prompts and answering with the appopriate answer.
- **Highly Uncencored** Since this model was also Re-Alligned to be uncencored, it can just answer anything you ask. So use at your own risk, we take no responsibility for your generated responces.
- **Multi Turn** The dataset this model trained on was mostly all multi turn conversations, spanning many different topics, with some emphasis on coding.
- **Json Mode** I did train this model on answering in JSON and using JSON tools., I have yet to try it, in depth but preliminary test shows it works, including.
- **Agent abilities** I did train this model on agent datasets, that teach it to do real world tasks such as picking up an object, and even navigating a webpage based off HTML.
- **Good Chili Recipe** The model gives a good chili recipe :)
- **32k Sequence Length** This model was trained with a 32k sequence length.
- **GUANACO PROMPT FORMAT** YOU MUST USE THE GUANACO PROMPT FORMAT SHOWN BELLOW IN USAGE. Not using this prompt format will lead to sub optimal results.
### Experimental Nature
Please note that Mistral-22b is still in a WIP. v0.3 has started training now, with a different method than used before, this is to hopefully make the model more round in its internel knowlledge. Through my testing I found V2 to be a significant improvement over v.1.
### Upcoming Release: V.3
- v0.3 will feature a different base model for testing purposes, however this model is pretty darn good for a second test. :)
- I have done some preliminary results with my new v0.3 base model, and it appears to achieve a lower loss after the first epoch compared to the other base model used for v0.1 and v0.2. so we have started training v0.3 with the new base model and with the longer dataset, will be done and released in the next 48 hours. :)
### Stay Updated
**V.3**, coming soon! And is currently training, will be done in the next ~24 hours. ๐Paper Coming Soon๐
- There will be more of these 22b models. They 5-6 siblings till I find what the best results are for MOE compression.
- However I am very surprised at how good this V.2 model is, off my small testing.
- I will be releasing a blog post soon on how I did this, I still will release a paper with testing and results, but I'm gonna rush out a paper before hand so I can share how I did this. I'd just like to make sure the right people get the right credit for their work that I used, so I have to read up some and make sure everyone gets the credit they deserve, ( and I need quality sleep my entire sleep schedule has been abominated since mixtrals drop.) I appreciate your understanding.
- I have a bunch of other methods I have yet to try, and many of those methods required me making this model, and running the initial tests, so they are only going to get better from here, I appretiate feedback, thank you!
### Usage:
- This model requires a specific chat template, as the training format was Guanaco this is what it looks like:
- "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
## Thank you!
- Thank you to [Daniel Han](https://twitter.com/danielhanchen), for Unsloth AI which was used to train this model. this led to a 2-3x speed increae and 2-3x decrease in memmory consumption.
- Thank you to [Charles Goddard](https://twitter.com/chargoddard), for providng me with a script that was nessary to make this model.
- Thank you to Mistral, for releasing Another Wonderful open source model, under Apache 2.0.
- Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
- Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention
- Thank you to Microsoft, for the Lora paper, and the Slice-GPT paper.
- Thank you to the Hugging Face team, for everything.โค๏ธ We really do appreciate you guys and all your hard work and commitment to the open source community!โค๏ธ
- Thank you to [Jon Durbin](https://x.com/jon_durbin?s=21) I used one of his DPO datasets converted to SFT, more info will be explained in paper.
## Future plans, train 4-5 more of these experimental models gather preliminary testing results, and then run evaluations on all the models I see have the best possibilities of excelling, then use the best one. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.