modelId (string, len 5-122) | author (string, len 2-42) | last_modified (unknown) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, len 1-4.05k) | pipeline_tag (48 classes) | createdAt (unknown) | card (string, len 1-901k)
---|---|---|---|---|---|---|---|---|---
shitshow123/tinylamma-20000 | shitshow123 | "2024-01-09T03:58:26Z" | 1,375 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-09T03:54:08Z" | ---
license: apache-2.0
---
Trained tinyllama1b-instruct for 20k steps of DPO. A rough sketch of the DPO objective follows.
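As an illustration only, here is a minimal sketch of the DPO preference loss that "20k DPO" refers to (Rafailov et al., 2023). The function name and `beta` value are assumptions, not the author's training code; the log-probability inputs are per-sequence sums.
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # implicit rewards: beta-scaled log-prob ratios against the frozen reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # maximize the margin between chosen and rejected completions
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```
|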
QuantFactory/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R-GGUF | QuantFactory | "2024-06-19T11:42:27Z" | 1,375 | 1 | null | [
"gguf",
"text-generation",
"base_model:Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R",
"license:llama3",
"region:us"
] | text-generation | "2024-06-19T08:30:46Z" | ---
license: llama3
pipeline_tag: text-generation
base_model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R
---
# LLaMA-3-8B-SFR-Iterative-DPO-Concise-R-GGUF
This is a quantized version of [Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R](https://huggingface.co/Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R) created using llama.cpp.
# Model Description
This is a concise version of Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R. A conciseness penalty was applied during training. |
NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF | NikolayKozloff | "2024-06-24T13:47:01Z" | 1,375 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"es",
"ca",
"dataset:bertin-project/bonanza-hf",
"dataset:bertin-project/zenobia-instruct-hf",
"base_model:bertin-project/Gromenauer-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-24T13:46:29Z" | ---
base_model: bertin-project/Gromenauer-7B-Instruct
datasets:
- bertin-project/bonanza-hf
- bertin-project/zenobia-instruct-hf
language:
- es
- ca
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`bertin-project/Gromenauer-7B-Instruct`](https://huggingface.co/bertin-project/Gromenauer-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bertin-project/Gromenauer-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF --hf-file gromenauer-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF --hf-file gromenauer-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF --hf-file gromenauer-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Gromenauer-7B-Instruct-Q8_0-GGUF --hf-file gromenauer-7b-instruct-q8_0.gguf -c 2048
```
|
tihomirnitro/GGUF_pathsocial_16_features | tihomirnitro | "2024-06-24T14:27:20Z" | 1,375 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-24T14:19:28Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** tihomirnitro
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/mala-anime-mix-nsfw-pony-xl-v5new-sdxl-spo | John6666 | "2024-06-30T22:12:42Z" | 1,375 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-30T22:08:19Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/442163?modelVersionId=609753).
|
heegyu/kogpt-j-base | heegyu | "2023-03-05T08:30:14Z" | 1,374 | 5 | transformers | [
"transformers",
"pytorch",
"jax",
"gptj",
"text-generation",
"ko",
"dataset:heegyu/korean-petitions",
"dataset:heegyu/namuwiki-extracted",
"dataset:heegyu/kowikitext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-11-22T02:21:12Z" | ---
license: mit
widget:
- text: 오늘 아침 정부는 발표를 통해
- text: |
    아 배고프다
datasets:
- heegyu/korean-petitions
- heegyu/namuwiki-extracted
- heegyu/kowikitext
language:
- ko
pipeline_tag: text-generation
---
## Model Configuration
- GPT-J(Flax, Pytorch)
- 12 Layers, 768 hidden dim, 3072 intermediate, 12 heads, 51200 vocab size
- 1024 max_seq_len
- Number of parameters: 163M
### Performance Benchmarks
<img src="https://github.com/HeegyuKim/language-model/blob/63d8bd7cd39f25e87e0e376cdd18df3f8b460dee/image/benchmark0304.png?raw=true" />
## Training Environment and Hyperparameters
- TPU V2-8
- Learning Rate: 6e-4, Batch Size: 512(=64 accum x 8 devices), Scheduler: Linear, WarmUp: 1000 step
- Optimizer: AdamW(adam_beta1=0.9 adam_beta2=0.98, weight_decay=0.01)
- Training Steps: 43247 (3 epoch)
- Number of training tokens: 21.11B (43247 * 512 * 1024seq / 1024^3)
- Training period: 2023/2/16 to 2023/2/18 (took 2 days 22 hours)
- Training code: https://github.com/HeegyuKim/language-model
## Training Data
- AIHub SNS conversations (730MB)
- AIHub colloquial corpus (422MB)
- AIHub books (1.6MB)
- AIHub large-scale web-based Korean corpus (12GB)
- Korean Wikipedia (867MB)
- Namuwiki (6.4GB)
- National Institute of Korean Language (NIKL) messenger conversations (21MB)
- NIKL everyday conversation corpus (23MB)
- NIKL written-language corpus (3.2GB)
- NIKL spoken-language corpus (1.1GB)
- NIKL newspaper corpus (~2022, 17GB)
- Blue House national petitions (525MB)
Dataset sizes are measured on the preprocessed jsonl files. The total token count is about 7B.
## Usage Example
```python
from transformers import pipeline
model_name = "heegyu/kogpt-j-base"
pipe = pipeline('text-generation', model=model_name)
print(pipe("안녕하세요", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128))
print(pipe("오늘 정부 발표에 따르면, ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128))
print(pipe("싸늘하다. 가슴에 비수가 날아와 꽂힌다. ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128, min_length=64))
```
Results: the sample generations (Korean continuations of the three prompts above) are garbled beyond recovery in this extraction and have been omitted.
## Caveats
The training data for this model may contain discriminatory or hateful content of various kinds, and no separate filtering was performed to remove it. Sentences generated by the model may therefore include discriminatory or hateful remarks about specific individuals or about race, gender, or disability.
|
quantumaikr/llama-2-70b-fb16-guanaco-1k | quantumaikr | "2023-08-06T20:35:45Z" | 1,374 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-06T19:54:02Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/llama-2-70b-fb16-guanaco-1k
## Model Description
`quantumaikr/llama-2-70b-fb16-guanaco-1k` is a Llama 2 70B model fine-tuned on the Guanaco dataset mlabonne/guanaco-llama2-1k.
## Usage
Start chatting with `quantumaikr/llama-2-70b-fb16-guanaco-1k` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the tokenizer and model (fp16, sharded across available GPUs)
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/llama-2-70b-fb16-guanaco-1k")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/llama-2-70b-fb16-guanaco-1k", torch_dtype=torch.float16, device_map="auto")

# build the prompt in the expected format
system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

# generate and decode a response
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : [email protected] |
ajibawa-2023/Python-Code-13B | ajibawa-2023 | "2023-11-20T16:38:23Z" | 1,374 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:ajibawa-2023/Python-Code-23k-ShareGPT",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-11T10:48:22Z" | ---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Python-Code-23k-ShareGPT
language:
- en
tags:
- code
---
**Python-Code-13B**
Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes. What if they could also give a detailed explanation along with the code?
That is what I have tried here. The base Llama-2 model was used for training. The model was trained on 23,000+ code samples, each consisting of 2 conversations.
This data was generated using GPT-3.5, GPT-4, etc. The conversations are in Vicuna/ShareGPT format, and each sample includes a detailed explanation along with the code.
I have released the [data](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 13 hours, using the DeepSpeed codebase. The base model was Meta's Llama-2.
This is a full fine tuned model. Links for quantized models are given below.
**GPTQ, GGUF & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/Python-Code-13B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/Python-Code-13B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/Python-Code-13B-AWQ)
**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Python Code along with necessary explanation.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Python-Code-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 47.16 |
| ARC (25-shot) | 58.79 |
| HellaSwag (10-shot) | 81.66 |
| MMLU (5-shot) | 54.78 |
| TruthfulQA (0-shot) | 42.83 |
| Winogrande (5-shot) | 74.03 |
| GSM8K (5-shot) | 9.55 |
| DROP (3-shot) | 8.5 |
|
martyn/llama2-megamerge-dare-13b-v2 | martyn | "2023-12-19T17:14:50Z" | 1,374 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dare",
"super mario merge",
"llama-2",
"merge",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-17T05:38:20Z" | ---
license: llama2
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- dare
- super mario merge
- pytorch
- llama
- llama-2
- merge
---
# llama 13b mega merge v2
The following were merged with DARE using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario)
## Mergelist
```
meta-llama/Llama-2-13b-hf
ajibawa-2023/Code-13B
ajibawa-2023/Python-Code-13B
meta-math/MetaMath-13B-V1.0
rombodawg/LosslessMegaCoder-llama2-13b-mini
NousResearch/Nous-Hermes-Llama2-13b
allenai/digital-socrates-13b
migtissera/Synthia-13B
Gryphe/MythoLogic-L2-13b
allenai/tulu-2-dpo-13b
FPHam/Free_Sydney_13b_HF
FPHam/Free_Sydney_V2_13b_HF
FPHam/Sydney_Overthinker_13b_HF
KoboldAI/LLaMA2-13B-Psyfighter2
Undi95/Unholy-v1-12L-13B
athirdpath/Eileithyia-13B
athirdpath/Orca-2-13b-Alpaca-Uncensored
```
## Merge command
```
python hf_merge.py mergelist.txt 13b-merge-v2 -p 0.11 -lambda 2.1
```
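For intuition, the DARE step can be pictured per tensor as below: drop each delta against the base weights with probability `p`, rescale the survivors by 1/(1-p), and add back with weight `lambda`. This is an illustrative sketch, not the `hf_merge.py` implementation, and the function name is made up; with several models, each rescaled delta is applied onto the base in turn.
```python
import torch

def dare_merge_tensor(base: torch.Tensor, finetuned: torch.Tensor,
                      p: float = 0.11, lam: float = 2.1) -> torch.Tensor:
    delta = finetuned - base                           # task vector of one fine-tune
    keep = (torch.rand_like(delta) >= p).to(delta.dtype)
    delta = delta * keep / (1.0 - p)                   # drop with prob p, rescale survivors
    return base + lam * delta                          # lambda-weighted add onto the base
```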
## Notes
* seems to generalize across instruct styles
* `p` and `lambda` values are still a guess |
PistachioAlt/Noromaid-Bagel-7B-Slerp | PistachioAlt | "2024-01-12T08:17:17Z" | 1,374 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-25T08:16:48Z" | ---
license: cc-by-nc-4.0
tags:
- merge
---
```yaml
slices:
- sources:
- model: jondurbin/bagel-dpo-7b-v0.1
layer_range: [0, 32]
- model: NeverSleep/Noromaid-7b-v0.1.1
layer_range: [0, 32]
merge_method: slerp
base_model: jondurbin/bagel-dpo-7b-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.3
dtype: float16
``` |
Weyaxi/Seraph-openchat-3.5-1210-Slerp | Weyaxi | "2024-01-08T13:17:51Z" | 1,374 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-27T16:12:52Z" | ---
license: apache-2.0
tags:
- merge
---
# Seraph-openchat-3.5-1210-Slerp
This is the model for Seraph-openchat-3.5-1210-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
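For background, slerp interpolates each pair of weight tensors along the arc between them rather than along a straight line. Below is a minimal per-tensor sketch (an illustration, not mergekit's actual code), with `t` taking the per-filter values from the config below:
```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w_a.flatten().double(), w_b.flatten().double()
    cos_theta = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    theta = torch.acos(cos_theta)
    if theta.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * w_a + t * w_b
    s = torch.sin(theta)
    out = (torch.sin((1 - t) * theta) / s) * a + (torch.sin(t * theta) / s) * b
    return out.reshape(w_a.shape).to(w_a.dtype)
```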
# Yaml Config
```yaml
slices:
- sources:
- model: Weyaxi/Seraph-7B
layer_range: [0, 32]
- model: openchat/openchat-3.5-1210
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
``` |
sambanovasystems/SambaLingo-Arabic-Base | sambanovasystems | "2024-05-14T22:19:05Z" | 1,374 | 36 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"ar",
"en",
"dataset:uonlp/CulturaX",
"arxiv:2404.05829",
"arxiv:2311.05741",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-15T22:48:10Z" | ---
license: llama2
datasets:
- uonlp/CulturaX
language:
- ar
- en
metrics:
- chrf
- accuracy
- bleu
---
# SambaLingo-Arabic-Base
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Arabic-Base is a pretrained bilingual Arabic and English model that adapts [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Arabic by training on 63 billion tokens from the Arabic split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model achieves state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Arabic-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Chat), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Arabic, English
- **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base", device_map="auto", torch_dtype="auto")
```
### Suggested Inference Parameters
We suggest setting `do_sample=False` as this is a pretrained checkpoint.
### Prompting Guidelines
This model is a pretrained checkpoint, so to use it effectively please use few shot prompting with exemplars. The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences [sambanovasystems/SambaLingo-Arabic-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Chat).
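As an illustration, here is a minimal few-shot sketch combining the loading snippet above with the suggested greedy decoding; the translation exemplars are invented placeholders, not from the card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Base", device_map="auto", torch_dtype="auto")

# few-shot prompt; the Llama tokenizer prepends the <s> (BOS) token automatically
prompt = "English: good morning\nArabic: صباح الخير\nEnglish: thank you\nArabic:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # greedy, as suggested
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```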
## Training Details
All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.
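As an illustration of the document-boundary masking described above (a sketch, not the actual training code), the allowed-attention mask for one packed sequence can be built like this:
```python
import torch

def packed_causal_mask(doc_ids: torch.Tensor) -> torch.Tensor:
    # doc_ids: (seq_len,) document index of each token in the packed sequence
    seq_len = doc_ids.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc  # True where attention is allowed
```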
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
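Conceptually, this corresponds to the standard vocabulary-extension steps in Transformers, sketched below; the token list is a placeholder, and this is not the actual procedure used to train the extended embeddings:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

new_arabic_tokens = ["..."]  # placeholder for the ~25,000 non-overlapping tokens
num_added = tokenizer.add_tokens(new_arabic_tokens)
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
```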
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- Hugging Face H4 team for open-sourcing the Zephyr training recipe and alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
kotoba-tech/kotoba-whisper-v1.0 | kotoba-tech | "2024-05-08T12:40:53Z" | 1,374 | 40 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"ja",
"arxiv:2212.04356",
"arxiv:2311.00430",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-14T08:53:48Z" | ---
license: apache-2.0
language: ja
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: CommonVoice 8.0 (Test Split)
src: >-
https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0/resolve/main/sample.flac
- example_title: JSUT Basic 5000
src: >-
https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac
- example_title: ReazonSpeech (Test Split)
src: >-
https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test/resolve/main/sample.flac
pipeline_tag: automatic-speech-recognition
metrics:
- wer
model-index:
- name: kotoba-tech/kotoba-whisper-v1.0
results:
- task:
type: automatic-speech-recognition
dataset:
name: CommonVoice_8.0 (Japanese)
type: japanese-asr/ja_asr.common_voice_8_0
metrics:
- name: WER
type: WER
value: 59.27
- name: CER
type: CER
value: 9.44
- task:
type: automatic-speech-recognition
dataset:
name: ReazonSpeech (Test)
type: japanese-asr/ja_asr.reazonspeech_test
metrics:
- name: WER
type: WER
value: 56.62
- name: CER
type: CER
value: 12.60
- task:
type: automatic-speech-recognition
dataset:
name: JSUT Basic5000
type: japanese-asr/ja_asr.jsut_basic5000
metrics:
- name: WER
type: WER
value: 64.36
- name: CER
type: CER
value: 8.48
---
# Kotoba-Whisper
_Kotoba-Whisper_ is a collection of distilled [Whisper](https://arxiv.org/abs/2212.04356) models for Japanese ASR, developed through a collaboration between
[Asahi Ushio](https://asahiushio.com) and [Kotoba Technologies](https://twitter.com/kotoba_tech).
Following the original work of distil-whisper ([Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430)),
we employ OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) as the teacher model; the student model consists of the full encoder of the
teacher large-v3 model and a two-layer decoder initialized from the first and last layers of the large-v3 decoder.
Kotoba-Whisper is **6.3x faster than large-v3**, while retaining error rates as low as those of large-v3.
As the initial version, we release ***kotoba-whisper-v1.0*** trained on the `large` subset of [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech)
(the largest speech-transcription paired dataset in Japanese extracted from Japanese TV audio recordings),
which amounts to 1,253 hours of audio with 16,861,235 characters of transcriptions (5 sec of audio with 18 text tokens on average), after
transcriptions with a WER above 10 were removed (see [WER Filter](https://huggingface.co/distil-whisper/distil-large-v3#wer-filter) for details).
The model was trained for 8 epochs with batch size 256 and a sampling rate of 16kHz, and the training and evaluation code to reproduce kotoba-whisper is available at [https://github.com/kotoba-tech/kotoba-whisper](https://github.com/kotoba-tech/kotoba-whisper).
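Conceptually, the WER filter keeps only utterances whose Whisper pseudo-label agrees closely with the reference transcription. A small illustrative sketch (reading "a WER above 10" as a 10% threshold; not the actual preprocessing code):
```python
from evaluate import load

wer_metric = load("wer")

def keep_example(reference: str, pseudo_label: str, threshold: float = 0.10) -> bool:
    # drop pairs whose pseudo label deviates from the reference by more than 10% WER
    wer = wer_metric.compute(predictions=[pseudo_label], references=[reference])
    return wer <= threshold
```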
Kotoba-whisper-v1.0 achieves better CER and WER than the [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) in the in-domain held-out test set
from ReazonSpeech, and achieves competitive CER and WER on the out-of-domain test sets including [JSUT basic 5000](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) and
the Japanese subset from [CommonVoice 8.0](https://huggingface.co/datasets/common_voice) (see [Evaluation](#evaluation) for detail).
- ***CER***
| Model | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:------------------------------------------------------------------------------------------------|---------------------------:|----------------:|------------------:|
| [**kotoba-tech/kotoba-whisper-v1.0**](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0) | 9.44 | 8.48 | **12.60** |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | **8.52** | **7.18** | 15.18 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 11.34 | 9.87 | 29.56 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 15.26 | 14.22 | 34.29 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 46.86 | 35.69 | 96.69 |
- ***WER***
| Model | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:------------------------------------------------------------------------------------------------|---------------------------:|----------------:|------------------:|
| [**kotoba-tech/kotoba-whisper-v1.0**](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0) | 59.27 | 64.36 | **56.62** |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | **55.41** | **59.34** | 60.23 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 63.64 | 69.52 | 76.04 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 74.21 | 82.02 | 82.99 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 93.78 | 97.72 | 94.85 |
- ***Latency***: As kotoba-whisper uses the same architecture as [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3),
it inherits the benefit of the improved latency compared to [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)
(**6.3x faster than large-v3**, see the table below taken from [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)).
| Model | Params / M | Rel. Latency |
|----------------------------------------------------------------------------------------------|------------|--------------|
| **[kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0)**| **756** | **6.3** |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 |
## Transformers Usage
Kotoba-Whisper is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first
install the latest version of Transformers.
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate
```
### Short-Form Transcription
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe short-form audio files (< 30-seconds) as follows:
```python
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs
)
# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]
# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
```
- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline (make sure the audio is sampled at 16kHz):
```diff
- result = pipe(sample, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", generate_kwargs=generate_kwargs)
```
- For segment-level timestamps, pass the argument `return_timestamps=True` and return the `"chunks"` output:
```python
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result["chunks"])
```
***Sequential Long-Form:*** Kotoba-whisper is designed to be compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered
inference of long audio files (> 30-seconds), and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form).
By default, if long audio files are passed to the model, it will transcribe them with the sequential long-form algorithm.
The sequential long-form algorithm should be used in either of the following scenarios:
1. Transcription accuracy is the most important factor, and latency is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm
described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Sections 5 of
the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf). The [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class can be used to transcribe long audio files with the sequential algorithm as follows:
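The snippet below is a minimal sketch following the short-form example above (the file name is a placeholder; `return_timestamps=True` also returns segment timestamps for the long recording):
```python
import torch
from transformers import pipeline

# config (same as the short-form example)
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# no `chunk_length_s`: audio longer than 30 seconds is decoded sequentially
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs
)

result = pipe("long_audio.mp3", return_timestamps=True, generate_kwargs=generate_kwargs)  # placeholder path
print(result["text"])
```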
### Chunked Long-Form
This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances,
the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)).
To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3, a chunk length of 25-seconds
is optimal. To activate batching over long audio files, pass the argument `batch_size`:
```python
import numpy as np
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
chunk_length_s=15,
batch_size=16
)
# load sample audio (concatenate instances to create a long audio)
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]['audio']['sampling_rate']}
# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
```
### Transcription with Prompt
Kotoba-whisper can generate transcription with prompting as below:
```python
import re
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs
)
# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
# --- Without prompt ---
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
print(text)
# 81歳、力強い走りに変わってきます。
# --- With prompt ---: Let's change `81` to `91`.
prompt = "91ๆญณ"
generate_kwargs['prompt_ids'] = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt").to(device)
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
# currently the pipeline for ASR appends the prompt at the beginning of the transcription, so remove it
text = re.sub(rf"\A\s*{prompt}\s*", "", text)
print(text)
# あっぶったでスカレット91歳、力強い走りに変わってきます。
```
### Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to further reduce inference latency and VRAM
requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a
more efficient flash attention version.
#### Flash Attention 2
We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2)
if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
```
pip install flash-attn --no-build-isolation
```
Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:
```diff
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
```
## Model Details
See [https://huggingface.co/distil-whisper/distil-large-v3#model-details](https://huggingface.co/distil-whisper/distil-large-v3#model-details).
## Evaluation
The following code snippet demonstrates how to evaluate the kotoba-whisper model on the Japanese subset of CommonVoice 8.0.
First, we need to install the required packages, including 🤗 Datasets to load the audio data and 🤗 Evaluate to
perform the WER calculation:
```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] evaluate jiwer
```
Evaluation can then be run end-to-end with the following example:
```python
import torch
from transformers import pipeline
from datasets import load_dataset
from evaluate import load
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
# model config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
normalizer = BasicTextNormalizer()
# data config
dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
audio_column = 'audio'
text_column = 'transcription'
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
batch_size=16
)
# load the dataset and sample the audio with 16kHz
dataset = load_dataset(dataset_name, split="test")
transcriptions = pipe(dataset['audio'], generate_kwargs=generate_kwargs)
transcriptions = [normalizer(i['text']).replace(" ", "") for i in transcriptions]
references = [normalizer(i).replace(" ", "") for i in dataset['transcription']]
# compute the CER metric
cer_metric = load("cer")
cer = 100 * cer_metric.compute(predictions=transcriptions, references=references)
print(cer)
```
The Hugging Face links to the major Japanese ASR datasets for evaluation are summarized [here](https://huggingface.co/collections/japanese-asr/japanese-asr-evaluation-dataset-66051a03d6ca494d40baaa26).
For example, to evaluate the model on JSUT Basic5000, change the `dataset_name`:
```diff
- dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
+ dataset_name = "japanese-asr/ja_asr.jsut_basic5000"
```
## Acknowledgements
* [OpenAI](https://openai.com/) for the Whisper [model](https://huggingface.co/openai/whisper-large-v3).
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration.
* Hugging Face 🤗 for the [Distil-Whisper codebase](https://github.com/huggingface/distil-whisper).
* [Reazon Human Interaction Lab](https://research.reazon.jp/) for the [ReazonSpeech dataset](https://huggingface.co/datasets/reazon-research/reazonspeech). |
phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF | phate334 | "2024-07-01T14:55:44Z" | 1,374 | 0 | transformers | [
"transformers",
"gguf",
"zhtw",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:yentinglin/Llama-3-Taiwan-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-01T14:55:21Z" | ---
base_model: yentinglin/Llama-3-Taiwan-8B-Instruct
language:
- zh
- en
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- zhtw
- llama-cpp
- gguf-my-repo
widget:
- text: 'A chat between a curious user and an artificial intelligence assistant. The
    assistant gives helpful, detailed, and polite answers to the user''s questions.
    USER: 你好，請問你可以幫我寫一封推薦信嗎？ ASSISTANT:'
---
# phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`yentinglin/Llama-3-Taiwan-8B-Instruct`](https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-taiwan-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-taiwan-8b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-taiwan-8b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo phate334/Llama-3-Taiwan-8B-Instruct-Q4_K_M-GGUF --hf-file llama-3-taiwan-8b-instruct-q4_k_m.gguf -c 2048
```
|
lkonle/fiction-gbert-large | lkonle | "2023-08-02T11:00:29Z" | 1,373 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"de",
"license:mit",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-05-15T13:54:22Z" | ---
license: mit
language:
- de
library_name: transformers
---
The foundation of this model
is the RoBERTa-style model
[deepset/gbert-large](https://huggingface.co/deepset/gbert-large). <br>
Following [Gururangan et al. (2020)](https://aclanthology.org/2020.acl-main.740/)
we gathered a collection of narrative fiction and
continued the model's pre-training task with it. <br>
The training is performed over 10 epochs on 2.3 GB of
text with a learning rate of 0.0001
(linear decrease) and a batch size of 512.
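A sketch of what this continued pre-training recipe could look like with the Hugging Face Trainer API. The corpus file name is a placeholder and the batch size is written as a single effective value, so treat this as illustrative rather than the actual training script.
```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = AutoModelForMaskedLM.from_pretrained("deepset/gbert-large")

# fiction_corpus.txt is a placeholder for the 2.3 GB fiction collection
dataset = load_dataset("text", data_files={"train": "fiction_corpus.txt"})["train"]
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="fiction-gbert-large",
    num_train_epochs=10,               # as stated in the card
    learning_rate=1e-4,                # with linear decrease
    lr_scheduler_type="linear",
    per_device_train_batch_size=512,   # effective batch size from the card
)
trainer = Trainer(model=model, args=args, train_dataset=dataset,
                  data_collator=DataCollatorForLanguageModeling(tokenizer))
trainer.train()
```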
|
l3utterfly/mistral-7b-v0.1-layla-v1 | l3utterfly | "2023-12-19T07:50:03Z" | 1,373 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-31T05:33:09Z" | ---
license: apache-2.0
language:
- en
---
# Model Card
### Model Description
Mistral 7B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Mistral 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
User:
Assistant:
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_l3utterfly__mistral-7b-v0.1-layla-v1)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 55.05 |
| ARC (25-shot) | 60.15 |
| HellaSwag (10-shot) | 83.25 |
| MMLU (5-shot) | 60.31 |
| TruthfulQA (0-shot) | 48.9 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 16.83 |
| DROP (3-shot) | 40.01 |
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
cardiffnlp/twitter-roberta-base-emotion-latest | cardiffnlp | "2024-03-07T15:21:38Z" | 1,373 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:cardiffnlp/super_tweeteval",
"arxiv:2310.14757",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-08T12:16:24Z" | ---
language:
- en
license: mit
datasets:
- cardiffnlp/super_tweeteval
pipeline_tag: text-classification
---
# cardiffnlp/twitter-roberta-base-emotion-latest
This is a RoBERTa-base model trained on 154M tweets until the end of December 2022 and finetuned for emotion classification (multilabel classification) on the _TweetEmotion_ dataset of [SuperTweetEval](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m).
## Labels
<code>
"id2label": {
"0": "anger",
"1": "anticipation",
"2": "disgust",
"3": "fear",
"4": "joy",
"5": "love",
"6": "optimism",
"7": "pessimism",
"8": "sadness",
"9": "surprise",
"10": "trust"
}
</code>
## Example
```python
from transformers import pipeline
text= "@user it also helps that the majority of NFL coaching is inept. Some of Bill O'Brien's play calling was wow, ! #GOPATS"
pipe = pipeline('text-classification', model="cardiffnlp/twitter-roberta-base-emotion-latest", return_all_scores=True)
predictions = pipe(text)[0]
predictions = [x for x in predictions if x['score'] > 0.5]
predictions
>> [{'label': 'anger', 'score': 0.8713036775588989},
{'label': 'disgust', 'score': 0.7899409532546997},
{'label': 'joy', 'score': 0.9664386510848999},
{'label': 'optimism', 'score': 0.6123248934745789}]
```
## Citation Information
Please cite the [reference paper](https://arxiv.org/abs/2310.14757) if you use this model.
```bibtex
@inproceedings{antypas2023supertweeteval,
title={SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research},
author={Dimosthenis Antypas and Asahi Ushio and Francesco Barbieri and Leonardo Neves and Kiamehr Rezaee and Luis Espinosa-Anke and Jiaxin Pei and Jose Camacho-Collados},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
year={2023}
}
``` |
Weyaxi/Platypus-Nebula-v2-7B | Weyaxi | "2023-11-12T10:08:26Z" | 1,373 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-12T09:54:38Z" |
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Platypus-Nebula-v2-7B
Platypus-Nebula-v2-7B is a merge of [bhenrym14/mistral-7b-platypus-fp16](https://huggingface.co/bhenrym14/mistral-7b-platypus-fp16) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 | habanoz | "2024-03-31T11:10:22Z" | 1,373 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-19T19:21:57Z" | ---
language:
- en
license: apache-2.0
datasets:
- OpenAssistant/oasst_top1_2023-08-25
pipeline_tag: text-generation
model-index:
- name: tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 32.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
name: Open LLM Leaderboard
---
TinyLlama-1.1B-intermediate-step-715k-1.5T fine-tuned using the OpenAssistant/oasst_top1_2023-08-25 dataset.
SFT code:
https://github.com/jzhang38/TinyLlama/tree/main/sft
Evaluation Results at:
https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public/blob/main/results_2023-11-23T17-25-53.937618.json
Command used:
```bash
accelerate launch finetune.py \
--model_name_or_path TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T \
--output_dir ./output/1_5T_FT_lr1e-5_ep5_top1_2023-08-25 \
--logging_steps 10 \
--save_strategy epoch \
--data_seed 42 \
--save_total_limit 2 \
--evaluation_strategy epoch \
--eval_dataset_size 512 \
--max_eval_samples 1000 \
--per_device_eval_batch_size 1 \
--max_new_tokens 32 \
--dataloader_num_workers 3 \
--group_by_length=False \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--do_eval \
--warmup_ratio 0.05 \
--lr_scheduler_type constant \
--dataset OpenAssistant/oasst_top1_2023-08-25 \
--dataset_format oasst1 \
--source_max_len 1 \
--target_max_len 1023 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 8 \
--max_steps 0 \
--num_train_epochs 5 \
--learning_rate 1e-5 \
--adam_beta2 0.999 \
--max_grad_norm 1.0 \
--weight_decay 0.0 \
--seed 0 \
--trust_remote_code
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |35.58|
|AI2 Reasoning Challenge (25-Shot)|32.85|
|HellaSwag (10-Shot) |58.16|
|MMLU (5-Shot) |25.96|
|TruthfulQA (0-shot) |38.35|
|Winogrande (5-shot) |57.70|
|GSM8k (5-shot) | 0.45|
|
Delcos/Starling-LM-11B-alpha | Delcos | "2023-12-08T03:01:05Z" | 1,373 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-06T07:23:04Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
---
<h1 style="font-size: 36px;">Starling, now with more parameters.</h1>
This is the 11b version of Starling. More details soon.
<h1 style="font-size: 36px;">PENDING</h1>
## Description:
**PENDING**
**Uncensored**
## Main Goals:
## Prompt Template:
**For optimal interaction, use this template:**
# Recommended Settings:
# Benchmarks:
PENDING
# Training Data:
PENDING
|
mncai/mistral-7b-v5 | mncai | "2023-12-11T10:40:06Z" | 1,373 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T09:13:22Z" | ---
license: llama2
language:
- en
---
# Model Card for mncai/mistral-7b-v5
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. As for generative AI, we are developing products like Code Assistant, TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on mistral-7b; instruction-tuned and DPO-trained.
### How to Use
Here are some examples of how to use our model.
```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/mistral-7b-v5'

# load the tokenizer and build a text-generation pipeline for the model
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # assumption: fp16 inference on GPU
    device_map="auto",
)

message = "### Instruction:\n\n두 개의 구가 각각 지름이 1, 2일 때 두 구의 부피는 몇 배지? 설명도 같이 해줘.\n\n### Response:\n"

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected] |
PistachioAlt/Synatra-MCS-7B-v0.3-RP-Slerp | PistachioAlt | "2024-01-12T08:18:19Z" | 1,373 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T10:23:11Z" | ---
license: cc-by-nc-4.0
tags:
- merge
---
```yaml
slices:
- sources:
- model: Q-bert/MetaMath-Cybertron-Starling
layer_range: [0, 32]
- model: maywell/Synatra-7B-v0.3-RP
layer_range: [0, 32]
merge_method: slerp
base_model: Q-bert/MetaMath-Cybertron-Starling
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
wang7776/Llama-2-7b-chat-hf-30-sparsity | wang7776 | "2024-02-05T17:53:54Z" | 1,373 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2306.11695",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T21:27:01Z" | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta
website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this
form. Requests will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: other
---
# Overview
This model has been pruned to 30% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695). This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
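As a minimal sketch of the Wanda criterion used here (illustrative only; the paper's actual implementation also handles calibration-data collection and layer-by-layer iteration):

```python
import torch

def wanda_prune(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.3) -> torch.Tensor:
    """Zero out the lowest-scoring weights in each output row.

    weight:   (out_features, in_features) weight of a linear layer
    act_norm: (in_features,) L2 norm of each input activation over calibration data
    """
    score = weight.abs() * act_norm          # Wanda score: |W_ij| * ||X_j||_2
    k = int(weight.shape[1] * sparsity)      # number of weights to drop per row
    idx = torch.topk(score, k, dim=1, largest=False).indices
    pruned = weight.clone()
    pruned.scatter_(1, idx, 0.0)
    return pruned
```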
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
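For illustration, a single-turn prompt in that format can be assembled like this (a sketch only; the linked `chat_completion` reference is authoritative, especially for multi-turn dialogues):

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    # BOS/EOS tokens are added by the tokenizer rather than written here.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

print(llama2_chat_prompt("You are a helpful assistant.", "What is Grouped-Query Attention?"))
```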
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software "bug," or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
mncai/agiin-11.1B-v0.0 | mncai | "2023-12-14T07:19:03Z" | 1,373 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-14T07:08:11Z" | ---
license: apache-2.0
language:
- en
---
# Model Card for mncai/agiin-11.1B-v0.0
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as Code Assistant, TOD Chatbot, and LLMOps, and we are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on the Mistral architecture; pretrained, instruction-tuned, and aligned with DPO.
### How to Use
Here are some examples of how to use our model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

hf_model = 'mncai/agiin-11.1B-v0.0'

# Build the tokenizer and a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
model = AutoModelForCausalLM.from_pretrained(hf_model, torch_dtype=torch.float16, device_map="auto")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Korean prompt: "There are two spheres with diameters of 1 and 2. How many
# times larger is each sphere's volume? Please explain."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 각 구의 부피는 몇배야? 설명도 같이 해줘.\n<|assistant|>\n"

sequences = pipe(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected] |
Azazelle/smol_bruin-7b | Azazelle | "2024-06-05T23:37:56Z" | 1,373 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-29T19:17:27Z" | ---
license: cc-by-4.0
tags:
- mistral
- merge
pipeline_tag: text-generation
model-index:
- name: smol_bruin-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.65
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/smol_bruin-7b
name: Open LLM Leaderboard
---
# Model Card for smol_bruin-7b
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of go-bruins-v2 and smol-7b.
.yaml file for mergekit:
```yaml
slices:
- sources:
- model: rwitz/go-bruins-v2
layer_range: [0, 32]
- model: rishiraj/smol-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.44, 0.72, 0.61, 0.83, 1]
- filter: mlp
value: [0.56, 0.28, 0.39, 0.17, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
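For context, slerp (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than a straight line; the `t` lists above vary the interpolation factor across layers for the attention and MLP tensors. A minimal sketch of the interpolation itself (mergekit's implementation differs in detail):

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```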
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Azazelle__smol_bruin-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.05|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |86.48|
|MMLU (5-Shot) |65.05|
|TruthfulQA (0-shot) |55.65|
|Winogrande (5-shot) |81.14|
|GSM8k (5-shot) |70.43|
|
nlpguy/ColorShadow-7B-v3 | nlpguy | "2024-03-04T13:47:22Z" | 1,373 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T20:01:26Z" | ---
license: apache-2.0
tags:
- merge
model-index:
- name: ColorShadow-7B-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v3
name: Open LLM Leaderboard
---
# ColorShadow-7B-v3
This is a Gradient-SLERP merge between ColorShadow-7B and Terminis-7B performed using mergekit.
Here is the config file used:
```
slices:
- sources:
- model: nlpguy/ColorShadow-7B
layer_range: [0, 32]
- model: Q-bert/Terminis-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nlpguy/ColorShadow-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.29|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |85.04|
|MMLU (5-Shot) |60.57|
|TruthfulQA (0-shot) |62.88|
|Winogrande (5-shot) |80.11|
|GSM8k (5-shot) |47.54|
|
bardsai/jaskier-7b-dpo | bardsai | "2024-01-10T00:47:58Z" | 1,373 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-10T00:06:58Z" | ---
license: apache-2.0
language:
- en
datasets:
- Intel/orca_dpo_pairs
pipeline_tag: conversational
---
# Jaskier 7b DPO
**This is a work-in-progress model and may not be ready for production use**
Model based on `mindy-labs/mindy-7b-v2` (a downstream version of Mistral-7B), fine-tuned using Direct Preference Optimization (DPO) on Intel/orca_dpo_pairs.
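A rough sketch of such a DPO fine-tuning setup (not the authors' actual script; it assumes trl's `DPOTrainer`, and the orca_dpo_pairs columns may need renaming to `prompt`/`chosen`/`rejected` first):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "mindy-labs/mindy-7b-v2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

trainer = DPOTrainer(
    model=model,  # with ref_model unset, a frozen copy serves as the reference policy
    args=TrainingArguments(
        output_dir="jaskier-7b-dpo",
        per_device_train_batch_size=1,
        remove_unused_columns=False,  # DPO needs the raw preference columns
    ),
    beta=0.1,  # strength of the KL penalty toward the reference policy
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```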
## How to use
You can use this model directly with the conversational pipeline:
```python
from transformers import Conversation, pipeline
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate."
},
{
"role": "user",
"content": "What is the capital city of France?"
}
]
pipe = pipeline("conversational", "bardsai/jaskier-7b-dpo")
print(pipe(Conversation(messages)))
```
### Output
> assistant: In me hearties, th' capital city o' France be called Paris! A grand an' beautiful port full o' culture an' history. Yar, it be a fine place to visit an' swashbuckle abou'
## Changelog
- 2024-01-10: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision, and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: bards.ai
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected] |
SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16 | SicariusSicariiStuff | "2024-06-10T10:59:53Z" | 1,373 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T01:44:52Z" | ---
language:
- en
license: apache-2.0
---
<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available in the following sizes and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_GGUF)| [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored; use responsibly.*
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, if (and that's a big if) you can get it to work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot) |84.79|
|MMLU (5-Shot) |54.29|
|TruthfulQA (0-shot) |54.22|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |24.64|
|
Aryanne/sheared-plus-westlake-50_75p | Aryanne | "2024-03-04T14:44:26Z" | 1,373 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T20:42:25Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-50_75p
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-50_75p
name: Open LLM Leaderboard
---
Another trial of merging models of different sizes. It is still under testing; it should be more stable, but I have no idea whether it improves or degrades the base model.
In this version I changed the weighting to include more WestLake.
Recipe:
```
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
- model: senseable/WestLake-7B-v2
parameters:
weight: 1.0
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-50_75p)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.31|
|AI2 Reasoning Challenge (25-Shot)|34.04|
|HellaSwag (10-Shot) |58.05|
|MMLU (5-Shot) |26.24|
|TruthfulQA (0-shot) |42.64|
|Winogrande (5-shot) |56.91|
|GSM8k (5-shot) | 0.00|
|
GitBag/Reviewer2_Mp | GitBag | "2024-02-25T23:04:03Z" | 1,373 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:GitBag/Reviewer2_PGE_cleaned",
"arxiv:2402.10886",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-25T20:30:42Z" | ---
license: apache-2.0
datasets:
- GitBag/Reviewer2_PGE_cleaned
language:
- en
metrics:
- bleu
- rouge
- bertscore
---
# Prompt Generation Model for [Reviewer2](https://arxiv.org/abs/2402.10886)
This is the prompt generation model (Mp) for our Reviewer2 pipeline. A demo of the model is provided in this [repo](https://github.com/ZhaolinGao/Reviewer2/tree/main).
## Citation
If you find this model useful in your research, please cite the following paper:
```
@misc{gao2024reviewer2,
title={Reviewer2: Optimizing Review Generation Through Prompt Generation},
      author={Zhaolin Gao and Kianté Brantley and Thorsten Joachims},
year={2024},
eprint={2402.10886},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
MaziyarPanahi/Calme-4x7B-MoE-v0.1 | MaziyarPanahi | "2024-03-17T20:16:59Z" | 1,373 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"generated_from_trainer",
"mistral",
"7b",
"calme",
"moe",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-17T19:02:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
- mistral
- 7b
- calme
- moe
- mixtral
model-index:
- name: Calme-4x7B-MoE-v0.1
results: []
model_name: Calme-4x7B-MoE-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" />
# MaziyarPanahi/Calme-4x7B-MoE-v0.1
## Model Description
Calme-4x7B is a Mixture of Experts (MoE) model integrating four individually fine-tuned, state-of-the-art Calme-7B models, with two experts active per token. This configuration brings the total to over 24 billion parameters. Calme-4x7B models are distinguished by their ability to generate text with exceptional clarity, calmness, and coherence.
### How to Use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-4x7B-MoE-v0.1")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-4x7B-MoE-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-4x7B-MoE-v0.1")
```
### Eval
| Metric | [Mistral-7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | [Calme-7B v0.1](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1) | [Calme-7B v0.2](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2) | [Calme-7B v0.3](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.3) | [Calme-7B v0.4](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.4) | [Calme-7B v0.5](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5) | [Calme-4x7B v0.1](https://huggingface.co/MaziyarPanahi/Calme-4x7B-MoE-v0.1) | [Calme-4x7B v0.2](https://huggingface.co/MaziyarPanahi/Calme-4x7B-MoE-v0.2) |
|-----------|--------------------------|-------|-------|-------|-------|-------|------------|------------|
| ARC | 63.14 | 67.24 | 67.75 | 67.49 | 64.85 | 67.58 | 67.15 | 76.66 |
| HellaSwag | 84.88 | 85.57 | 87.52 | 87.57 | 86.00 | 87.26 | 86.89 | 86.84 |
| TruthfulQA| 68.26 | 59.38 | 78.41 | 78.31 | 70.52 | 74.03 | 73.30 | 73.06 |
| MMLU | 60.78 | 64.97 | 61.83 | 61.93 | 62.01 | 62.04 | 62.16 | 62.16 |
| Winogrande| 77.19 | 83.35 | 82.08 | 82.32 | 79.48 | 81.85 | 80.82 | 81.06 |
| GSM8k | 40.03 | 69.29 | 73.09 | 73.09 | 77.79 | 73.54 | 74.53 | 75.66 |
Some extra information to help you pick the right `Calme-7B` model:
| Use Case Category | Recommended Calme-7B Model | Reason |
|-------------------------------------------------|-----------------------------|------------------------------------------------------------------------------------------|
| Educational Tools and Academic Research | [Calme-7B v0.5](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5) | Balanced performance, especially strong in TruthfulQA for accuracy and broad knowledge. |
| Commonsense Reasoning and Natural Language Apps | [Calme-7B v0.2](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.2) or [Calme-7B v0.3](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.3) | High performance in HellaSwag for understanding nuanced scenarios. |
| Trustworthy Information Retrieval Systems | [Calme-7B v0.5](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5) | Highest score in TruthfulQA, indicating reliable factual information provision. |
| Math Educational Software | [Calme-7B v0.4](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.4) | Best performance in GSM8k, suitable for numerical reasoning and math problem-solving. |
| Context Understanding and Disambiguation | [Calme-7B v0.5](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5) | Solid performance in Winogrande, ideal for text with context and pronoun disambiguation. |
### Quantized Models
> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!
- GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-4x7B-MoE-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-4x7B-MoE-v0.1-GGUF)
## Examples
```
<s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
describe about pros and cons of docker system. [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s> [INST] explain step by step 25-4*2+3=? [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
**Multilingual:**
```
<s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive.
Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations.
Décrivez les avantages et les inconvénients du système Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер.
Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією.
Опис про переваги та недоліки системи Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
|
backyardai/Llama-3-8B-Ultra-Instruct-GGUF | backyardai | "2024-05-22T22:26:47Z" | 1,373 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:elinas/Llama-3-8B-Ultra-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-05-01T02:14:39Z" | ---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
base_model: elinas/Llama-3-8B-Ultra-Instruct
model_name: Llama-3-8B-Ultra-Instruct-GGUF
quantized_by: brooketh
---
<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p>
***
# Llama 3 8B Ultra Instruct
- **Creator:** [elinas](https://huggingface.co/elinas/)
- **Original:** [Llama 3 8B Ultra Instruct](https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct)
- **Date Created:** 2024-04-29
- **Trained Context:** 8192 tokens
- **Description:** A small general-purpose model that combines the most powerful instruct models with quality, uncensored roleplaying models. It introduces better RAG capabilities (via Bagel) to Llama 3 8B Instruct, as well as German multilingual support, higher general intelligence, and vision support. A model focused on biology adds knowledge in the medical field.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware.
GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
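For illustration, a GGUF build of this model can also be loaded outside Backyard AI with llama-cpp-python; the file name below is a placeholder, and `n_gpu_layers` controls the CPU/GPU split described above:

```python
from llama_cpp import Llama

# Offload 20 transformer layers to the GPU; the rest run on the CPU.
llm = Llama(
    model_path="Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf",  # placeholder file name
    n_gpu_layers=20,
    n_ctx=8192,  # matches the model's trained context
)
out = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```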
***
<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
## Backyard AI
- Free, local AI chat application.
- One-click installation on Mac and PC.
- Automatically use GPU for maximum speed.
- Built-in model manager.
- High-quality character hub.
- Zero-config desktop-to-mobile tethering.
Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
*** |
mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF | mradermacher | "2024-06-14T10:21:17Z" | 1,373 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"axolotl",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"base_model:cognitivecomputations/dolphin-2.9.3-qwen2-0.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-14T10:16:45Z" | ---
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.3-qwen2-0.5b-i1-GGUF/resolve/main/dolphin-2.9.3-qwen2-0.5b.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
tapan247/NousResearch_Llama-2-7b-chat-hf | tapan247 | "2024-06-29T17:16:35Z" | 1,373 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-29T17:11:30Z" | Entry not found |
digitalepidemiologylab/covid-twitter-bert | digitalepidemiologylab | "2021-05-19T15:52:48Z" | 1,372 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"Twitter",
"COVID-19",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: "en"
thumbnail: "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png"
tags:
- Twitter
- COVID-19
license: mit
---
# COVID-Twitter-BERT (CT-BERT) v1
:warning: _You may want to use the [v2 model](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) which was trained on more recent data and yields better performance_ :warning:
BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Overview
This model was trained on 160M tweets collected between January 12 and April 16, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.
This model was evaluated based on downstream classification tasks, but it could be used for any other NLP task which can leverage contextual embeddings.
In order to achieve best results, make sure to use the same text preprocessing as we did for pretraining. This involves replacing user mentions, URLs and emojis. You can find a script in our project's [GitHub repo](https://github.com/digitalepidemiologylab/covid-twitter-bert).
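A rough sketch of that preprocessing (the authoritative script lives in the linked repo; the exact replacement tokens below are assumptions):

```python
import re

def preprocess_tweet(text: str) -> str:
    text = re.sub(r"@\w+", "@user", text)          # anonymize user mentions
    text = re.sub(r"https?://\S+", "<url>", text)  # replace URLs with a placeholder
    # Emoji replacement is omitted here; the official script converts
    # emojis to textual aliases before tokenization.
    return " ".join(text.split())
```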
## Example usage
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
model = AutoModel.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
```
You can also use the model with the `pipeline` interface:
```python
from transformers import pipeline
import json
pipe = pipeline(task='fill-mask', model='digitalepidemiologylab/covid-twitter-bert')
out = pipe(f"In places with a lot of people, it's a good idea to wear a {pipe.tokenizer.mask_token}")
print(json.dumps(out, indent=4))
[
{
"sequence": "[CLS] in places with a lot of people, it's a good idea to wear a mask [SEP]",
"score": 0.9959408044815063,
"token": 7308,
"token_str": "mask"
},
...
]
```
## References
[1] Martin Müller, Marcel Salathé, Per E Kummervold. "COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter" arXiv preprint arXiv:2005.07503 (2020).
|
google/vit-hybrid-base-bit-384 | google | "2023-09-11T20:45:52Z" | 1,372 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit-hybrid",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-12-06T17:38:55Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model) - Hybrid
The hybrid Vision Transformer (ViT) model was proposed in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the [plain Vision Transformer](vit), by leveraging a convolutional backbone (specifically, [BiT](bit)) whose features are used as initial "tokens" for the Transformer.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
*While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.*
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTHybridImageProcessor, ViTHybridForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTHybridImageProcessor.from_pretrained('google/vit-hybrid-base-bit-384')
model = ViTHybridForImageClassification.from_pretrained('google/vit-hybrid-base-bit-384')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
# Predicted class: tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT-Hybrid model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
migtissera/SynthIA-7B-v1.5 | migtissera | "2023-11-17T21:31:42Z" | 1,372 | 4 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-07T23:16:18Z" | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
<br>

<br>
## Example Usage
### Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Code example:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/SynthIA-7B-v1.5"
output_file_path = "./SynthIA-7B-v1.5-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__SynthIA-7B-v1.5)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 54.8 |
| ARC (25-shot) | 62.71 |
| HellaSwag (10-shot) | 83.37 |
| MMLU (5-shot) | 63.48 |
| TruthfulQA (0-shot) | 51.32 |
| Winogrande (5-shot) | 79.24 |
| GSM8K (5-shot) | 17.44 |
| DROP (3-shot) | 26.01 |
|
amazingvince/where-llambo-7b | amazingvince | "2023-12-11T13:48:19Z" | 1,372 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-05T02:38:41Z" | ---
license: apache-2.0
---
This is an SFT model built on a mix of public datasets; DPO with custom data is being set up next.
This is a finetune of Mistral. It should exhibit a broad base of instruction tuning and some other fun roleplaying capabilities.
Training is still in progress; it is about 50% done. |
l3utterfly/minima-3b-layla-v1 | l3utterfly | "2023-12-19T07:49:37Z" | 1,372 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-12T01:33:02Z" | ---
license: llama2
language:
- en
---
# Model Card
### Model Description
[MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B) (from GeneZC) fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** MiniMA-3B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
USER:
ASSISTANT:
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
NExtNewChattingAI/shark_tank_ai_7_b | NExtNewChattingAI | "2023-12-17T16:43:55Z" | 1,372 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-17T16:23:32Z" | ---
license: apache-2.0
language:
- en
---
This model is based on <a href="https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO">LeoScorpius</a>, further trained on internal data.
This chatbot is a highly advanced artificial intelligence designed to provide you with personalized assistance and support. With its natural language processing capabilities, it can understand and respond to a wide range of queries and requests, making it a valuable tool for both personal and professional use.
The chatbot is equipped with a vast knowledge base, allowing it to provide accurate and reliable information on a wide range of topics, from general knowledge to specific industry-related information. It can also perform tasks such as scheduling appointments, sending emails, and even ordering products online.
One of the standout features of this assistant chatbot is its ability to learn and adapt to your individual preferences and needs. Over time, it can become more personalized to your specific requirements, making it an even more valuable asset to your daily life.
The chatbot is also designed to be user-friendly and intuitive, with a simple and easy-to-use interface that allows you to interact with it in a natural and conversational way. Whether you're looking for information, need help with a task, or just want to chat, your assistant chatbot is always ready and available to assist you. |
cookinai/CatMacaroni-Slerp | cookinai | "2024-01-03T21:17:26Z" | 1,372 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-20T00:54:10Z" | ---
license: apache-2.0
tags:
- merge
---
#1 on the Open LLM Leaderboard for 7B Models 💯 (December 20, 2023)
Slerp Merge of AIDC-ai-business/Marcoroni-7B-v3 and rishiraj/CatPPT-base
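For intuition: SLERP interpolates each pair of weight tensors along an arc between them rather than a straight line, which preserves their norms better than plain averaging. Below is a minimal NumPy sketch of the per-tensor operation (mergekit's real implementation handles more edge cases, and the `t` schedule in the config picks a different interpolation factor per layer and module):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_unit = a.ravel() / (np.linalg.norm(a) + eps)
    b_unit = b.ravel() / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between tensors
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```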
.yaml file for mergekit
```yaml
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: rishiraj/CatPPT-base
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
Mihaiii/Pallas-0.5-LASER-0.4 | Mihaiii | "2024-01-02T17:20:51Z" | 1,372 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5-LASER-0.3",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-02T08:26:02Z" | ---
base_model: Mihaiii/Pallas-0.5-LASER-0.3
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This model has a [LASER](https://pratyushasharma.github.io/laser/) intervention on [Mihaiii/Pallas-0.5-LASER-0.3](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.3) .
Configs used:
- lnum: 55
- lnames: mlp (meaning: ["mlp.gate_proj.weight", "mlp.up_proj.weight", "mlp.down_proj.weight"])
- rate: 9
- dataset: bigbench (subset: causal_judgement)
- intervention type: rank-reduction
|Name|Validation acc (higher is better)|Validation logloss (lower is better)|Test acc (higher is better)|Test logloss (lower is better)|
|---|---|---|---|---|
|Pallas-0.5|55.263|1.650|60.526|1.463|
|Pallas-0.5-LASER-0.1|55.263|1.639|61.184|1.451|
|Pallas-0.5-LASER-0.2|55.263|1.646|61.184|1.458|
|Pallas-0.5-LASER-0.3|55.263|1.575|61.842|1.382|
|Pallas-0.5-LASER-0.4|55.263|1.525|61.842|1.326|
|Pallas-0.5-LASER-0.5|55.263|1.484|61.842|1.297|
|Pallas-0.5-LASER-0.6|55.263|1.455|61.184|1.283|
In order to replicate on a single A100, you can use [my branch](https://github.com/Mihaiii/laser/tree/allow-Yi-on-one-A100) (the original code will throw OOM for 34b models).
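As background, the rank-reduction intervention replaces a chosen weight matrix with its truncated-SVD approximation. The snippet below is a minimal sketch of that operation, assuming the configured `rate` maps to the fraction of singular values kept (the exact mapping is defined in the LASER repository); the layer and attribute names are illustrative:

```python
import torch

def rank_reduce(weight: torch.Tensor, keep_fraction: float) -> torch.Tensor:
    """Return the best low-rank approximation of `weight` via SVD truncation."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(keep_fraction * S.numel()))  # number of singular values kept
    return (U[:, :k] * S[:k]) @ Vh[:k, :]

# Illustrative use on the MLP projections of layer 55, mirroring the config above:
# mlp = model.model.layers[55].mlp
# for name in ("gate_proj", "up_proj", "down_proj"):
#     lin = getattr(mlp, name)
#     lin.weight.data = rank_reduce(lin.weight.data, keep_fraction=0.01).to(lin.weight.dtype)
```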
|
PassionFriend/5EfJTzHZCCwULCdgWMsdiVS4BkDuVZ6ZmiVwaa5Ky2a6WU7w_vgg | PassionFriend | "2024-03-01T06:42:35Z" | 1,372 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-14T13:06:54Z" | Entry not found |
PassionFriend/5D7JpLmtwau9GngLSENZYmpxVqi8VRWXPc6JkVbvBuk8T4nf_vgg | PassionFriend | "2024-03-01T06:43:57Z" | 1,372 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-02-14T13:09:35Z" | Entry not found |
hfl/chinese-alpaca-2-13b-16k | hfl | "2023-12-23T07:27:41Z" | 1,371 | 29 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-31T13:47:47Z" | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-13B-16K
**This is the full Chinese-Alpaca-2-13B-16K model (context size 16K), which can be loaded directly for inference and full-parameter training.**
**Related models👇**
* Long context base models (16K)
* [Chinese-LLaMA-2-7B-16K (full model)](https://huggingface.co/hfl/chinese-llama-2-7b-16k)
* [Chinese-LLaMA-2-LoRA-7B-16K (LoRA model)](https://huggingface.co/hfl/chinese-llama-2-lora-7b-16k)
* [Chinese-LLaMA-2-13B-16K (full model)](https://huggingface.co/hfl/chinese-llama-2-13b-16k)
* [Chinese-LLaMA-2-LoRA-13B-16K (LoRA model)](https://huggingface.co/hfl/chinese-llama-2-lora-13b-16k)
* Long context Instruction/Chat models
* [Chinese-Alpaca-2-7B-16K (full model)](https://huggingface.co/hfl/chinese-alpaca-2-7b-16k)
* [Chinese-Alpaca-2-LoRA-7B-16K (LoRA model)](https://huggingface.co/hfl/chinese-alpaca-2-lora-7b-16k)
* [Chinese-Alpaca-2-13B-16K (full model)](https://huggingface.co/hfl/chinese-alpaca-2-13b-16k)
* [Chinese-Alpaca-2-LoRA-13B-16K (LoRA model)](https://huggingface.co/hfl/chinese-alpaca-2-lora-13b-16k)
* Base models
* [Chinese-LLaMA-2-7B (full model)](https://huggingface.co/hfl/chinese-llama-2-7b)
* [Chinese-LLaMA-2-LoRA-7B (LoRA model)](https://huggingface.co/hfl/chinese-llama-2-lora-7b)
* [Chinese-LLaMA-2-13B (full model)](https://huggingface.co/hfl/chinese-llama-2-13b)
* [Chinese-LLaMA-2-LoRA-13B (LoRA model)](https://huggingface.co/hfl/chinese-llama-2-lora-13b)
* Instruction/Chat models
* [Chinese-Alpaca-2-7B (full model)](https://huggingface.co/hfl/chinese-alpaca-2-7b)
* [Chinese-Alpaca-2-LoRA-7B (LoRA model)](https://huggingface.co/hfl/chinese-alpaca-2-lora-7b)
* [Chinese-Alpaca-2-13B (full model)](https://huggingface.co/hfl/chinese-alpaca-2-13b)
* [Chinese-Alpaca-2-LoRA-13B (LoRA model)](https://huggingface.co/hfl/chinese-alpaca-2-lora-13b)
# Description of Chinese-LLaMA-Alpaca-2
This project is based on the Llama-2, released by Meta, and it is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models have been expanded and optimized with Chinese vocabulary beyond the original Llama-2. We used large-scale Chinese data for incremental pre-training, which further improved the fundamental semantic understanding of the Chinese language, resulting in a significant performance improvement compared to the first-generation models. The relevant models support a 4K context and can be expanded up to 18K+ using the NTK method.
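As a rough illustration of the NTK approach (a hedged sketch, not this project's exact implementation), the RoPE base is inflated so the rotary frequencies stretch over a longer context without retraining:

```python
import torch

def ntk_scaled_inv_freq(dim: int, base: float = 10000.0, scale: float = 4.0) -> torch.Tensor:
    """NTK-aware RoPE: grow the base so the same head dim covers `scale`x more context."""
    ntk_base = base * scale ** (dim / (dim - 2))
    return 1.0 / (ntk_base ** (torch.arange(0, dim, 2).float() / dim))
```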
The main contents of this project include:
* New extended Chinese vocabulary beyond Llama-2, open-sourcing the Chinese LLaMA-2 and Alpaca-2 LLMs.
* Open-sourced the pre-training and instruction finetuning (SFT) scripts for further tuning on users' own data
* Quickly deploy and experience the quantized LLMs on CPU/GPU of personal PC
* Support for LLaMA ecosystems like 🤗transformers, llama.cpp, text-generation-webui, LangChain, vLLM etc.
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for details. |
Yntec/lamettaRemix | Yntec | "2023-09-07T16:15:50Z" | 1,371 | 4 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Chibi",
"Adorable",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Lasorco",
"Safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-07T15:00:14Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Chibi
- Adorable
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Lasorco
- diffusers
- Safetensors
---
# lametta Remix
A mix of lametta v1745 and lametta v1602 to bring the best of v1745 to this classic lametta version! It has the MoistMixV2VAE baked in.
Comparison:

(Click for larger)
Sample and prompts:

pretty cute girl, accurately sitting, detailed chibi eyes, holding rocket launcher, beautiful detailed legs, police girl, gorgeous detailed hair, uniform hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by kyoani and clay mann
Original page:
https://huggingface.co/Lasorco/lametta
# Recipe
- SuperMerger Weight sum Train Difference Use MBW 0,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,0,0,0,0,0,0,1,1,0,0
Model A:
lametta v1602
Model B:
lametta v1745
Output Model:
lametta Remix |
NurtureAI/openchat_3.5-16k | NurtureAI | "2024-03-04T18:07:18Z" | 1,371 | 31 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"arxiv:2309.11235",
"arxiv:2303.08774",
"arxiv:2212.10560",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-08T04:20:19Z" | ---
license: apache-2.0
model-index:
- name: openchat_3.5-16k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.47
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 21.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/openchat_3.5-16k
name: Open LLM Leaderboard
---
# OpenChat 3.5 extended to 16k context length.
The same license applies from the original openchat/openchat_3.5 model.
# Original Model Card
# OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
</div>
<p align="center">
<a href="https://github.com/imoneoi/openchat">GitHub Repo</a> โข
<a href="https://openchat.team">Online Demo</a> โข
<a href="https://discord.gg/pQjnXvNKHY">Discord</a> โข
<a href="https://twitter.com/imonenext">Twitter</a> โข
<a href="https://huggingface.co/openchat">Huggingface</a> โข
<a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a>
</p>
**🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥**
**🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖**
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;">
</div>
OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
[](https://zenodo.org/badge/latestdoi/645397533)
## Usage
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
<details>
<summary>Example request (click to expand)</summary>
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
Coding Mode
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"condition": "Code",
"messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}]
}'
```
</details>
| Model | Size | Context | Weights | Serving |
|--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
| OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` |
For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below.
<details>
<summary>Conversation templates (click to expand)</summary>
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
</details>
## Comparison with [X.AI Grok models](https://x.ai/)
Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok?
Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? Let's open up the doors and share our knowledge with the world! 🚀💡
(Written by OpenChat 3.5, with a touch of humor and wit.)
| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|--------------|-------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ? | 55.8 | 73 | 63.2 | 23.9 | 62.9 |
## <a id="benchmarks"></a> Benchmarks
| Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K |
|--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
| OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** |
| ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 |
| Zephyr-β^          | 7B       | 34.6     | 7.34         | 39.0     | 40.6     | 40.8          | 39.8         | 22.0            | 16.0        | 5.1          |
| Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 |
| Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 |
| | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |
*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.
^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
## Limitations
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
## License
Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
## Citation
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```
## Acknowledgements
We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training.
Special thanks go to Changling Liu from GPT Desk Pte. Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions.
Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__openchat_3.5-16k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.03|
|AI2 Reasoning Challenge (25-Shot)|63.31|
|HellaSwag (10-Shot) |83.58|
|MMLU (5-Shot) |61.90|
|TruthfulQA (0-shot) |43.47|
|Winogrande (5-shot) |80.11|
|GSM8k (5-shot) |21.83|
|
WebraftAI/synapsellm-7b-mistral-v0.3-preview | WebraftAI | "2024-04-06T04:40:17Z" | 1,371 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-29T10:32:26Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
model-index:
- name: synapsellm-7b-mistral-v0.3-preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 74.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.03
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.3-preview
name: Open LLM Leaderboard
---
# SynapseLLM:
SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning process is conducted on a custom dataset, albeit limited in scope, focusing on code and normal question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM** (training hyperparameters; a code sketch of these settings follows the list):
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 16
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 100
- Epoch(s) (trained): 1
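The hyperparameters above translate roughly into the following sketch (the adapter rank/alpha are not stated in the card, so the `LoraConfig` values are placeholders, as is the output directory):

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Placeholder QLoRA adapter settings; only "QLoRA" is stated in the card
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="synapsellm-7b-v0.3",    # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",          # requires bitsandbytes
    warmup_ratio=0.03,
    max_steps=100,
    fp16=True,                          # float16 precision
)
```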
### Model Description
This is a 7B-parameter, decoder-only, transformer-based model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 409k rows, comprising 140k general code, 143k GPT-3.5 Q/A, 63k Python code, and 54k general Q/A (through GPT-4) [each row contains one instruction and one response]. This is a full model merged and compiled with the trained adapters, so you can easily load it through the transformers library.
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.3-preview)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.01|
|AI2 Reasoning Challenge (25-Shot)|53.84|
|HellaSwag (10-Shot) |74.86|
|MMLU (5-Shot) |54.81|
|TruthfulQA (0-shot) |55.03|
|Winogrande (5-shot) |74.59|
|GSM8k (5-shot) |28.96|
|
timpal0l/Mistral-7B-v0.1-flashback-v2 | timpal0l | "2024-03-25T16:36:04Z" | 1,371 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"flashback",
"web",
"conversational",
"sv",
"en",
"no",
"da",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-04T10:31:37Z" | ---
language:
- sv
- en
- 'no'
- da
license: mit
tags:
- pretrained
- flashback
- web
- conversational
models:
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
pipeline_tag: text-generation
widget:
- text: Jag tycker att det är roligt med
model-index:
- name: Mistral-7B-v0.1-flashback-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.66
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
---
# 🐈‍⬛ Mistral-7B-v0.1-flashback-v2

Mistral-7B-v0.1-flashback-v2 is a continuation of the pretraining process for the base Mistral-7B-v0.1 model, utilizing 2,251,233 forum threads from the Swedish website https://www.flashback.org/, which is roughly 40 GB of text.
It is a full finetune for one epoch.
* GGUF Version available [**Here**](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-GGUF)
* Instruct version [**Here**](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct)
## How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "timpal0l/Mistral-7B-v0.1-flashback-v2"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
prompt = "Idag รคr det den bรคsta"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
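# Sample up to 256 new tokens from the prompt with temperature 0.8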
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=256,
do_sample=True,
temperature=0.8,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
generated_text
```
```
<s> Idag är det den bästa dagen i hela veckan, för nu tar det slut!\n\n>! Gnällfesten!\n\nJag sitter här, oerhört förvirrad, och försöker förstå varför vi ens måste fortsätta att existera efter döden. Jag menar, jag förstår ju egentligen att det aldrig kan ta slut, eller inte "ta slut" i den bemärkelsen att materian försvinner, men det är inte det jag pratar om.\n\nDöden, det faktum att man dör och aldrig kan uppleva livet igen. Det som är liv och ger livet en mening, det försvinner i döden. Och sen börjas det om, om och om igen. Varför behöver vi så många liv? Vi är ju inte ens medvetna av att vi någonsin har levt, så varför ska vi komma hit och bli medvetna hela tiden?\n\nDet här är en sådan fråga som jag aldrig kan få
```
## Data Format:
To mimic the forum data format used in pre-training, each thread has the following structure (a serialization sketch follows the structure below):
```html
# Thread_Title
username_thread_creator:
Hello, this is my thread...
username_user_1:
This is a response to the thread, without quoting anything.
username_user_2:
> username_user_1: This is a response to the thread, without quoting anything.
I am now quoting username_user_1...
```
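A minimal sketch of how a thread could be serialized into this layout (the function and field names are assumptions for illustration, not the project's actual pipeline):

```python
def format_thread(title, posts):
    """Serialize a thread: `# title`, then `username:` blocks with optional quote lines.

    `posts` is a list of (username, text, quoted) tuples, where `quoted` is either
    None or a (quoted_username, quoted_text) pair rendered as a `>` line.
    """
    lines = [f"# {title}", ""]
    for username, text, quoted in posts:
        lines.append(f"{username}:")
        if quoted is not None:
            quoted_user, quoted_text = quoted
            lines.append(f"> {quoted_user}: {quoted_text}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines).strip()
```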
### Random training sample:
```html
# Tips om aktiviter och sevärdheter i Stockholm för någon med funktionsnedsättning
Roozbeh:
Hej!
Jag jobbar som assistent åt en kille på ett stödboende.
Nästa vecka åker han, jag och en kollega till Stockholm och han är superpeppad på att se sig omkring.
Har ni några guld tips?
Får gärna ge förslag både dag och kvällstid om ni kommer på något.
Vi har redan tänkt på att se slottet.
Och gamla staden, finns där något kanske?
Bra cafen/restauranger som inte är allt för dyra.
Några ställen som man bara måste se eller göra i Stockholm?
Han är inte rullstolsbunden ska nämnas, är ung och i ganska bra kondition fysiskt.
Alla tips är välkomna tack!
Annéa:
Beror lite på vad man gillar. Om ni ändå är vi Slottet så har ni ju dom stora turistgatorna i Gamla Stan runt hörnet precis, dock inget ställe man vill gå på om man tycker det är jobbigt med folk och att trängas och ingenstans där man äter särskilt bra eller billigt.
Laust:
Åka upp på globen funkar med rullstol
Thomaz:
Välkomna!
Vad har han för intressen?
Är ni ändå på slottet kan jag rekommendera livrustkammaren, där kläder och attiraljer såsom vagnar (och även uppstoppade hästar) från svenska kungligheter är utställda.
Anne-Jorunn:
Gröna Lund och skansen är guld, om hen klarar av att åka karusell så går ni också förbi alla köer om du är stödperson.
Abba museumet, Vasamuseumet, militärhistoriska museet, tekniska museet, Junibacken. Finns mycket bra.
Annars kan det vara skoj att gå runt på Mall of Scandinavia, skönt att vara inne med toaletter inom räckhåll.
Muscab:
> Roozbeh: Hej!
>
> Jag jobbar som assistent åt en kille på ett stödboende.
> Nästa vecka åker han, jag och en kollega till Stockholm och han är superpeppad på att se sig omkring.
> Har ni några guld tips?
> Får gärna ge förslag både dag och kvällstid om ni kommer på något.
> Vi har redan tänkt på att se slottet.
> Och gamla staden, finns där något kanske?
> Bra cafen/restauranger som inte är allt för dyra.
> Några ställen som man bara måste se eller göra i Stockholm?
> Han är inte rullstolsbunden ska nämnas, är ung och i ganska bra kondition fysiskt.
> Alla tips är välkomna tack!
Jag tror de mesta platser är ganska ovänliga för rullstol. Backar, grusvägar, kullersten, trånga dörrar, trappor. Finns det någon restaurang/café som är billig och rullstolsvänlig? Vet inte. Köp ett paket glassar på ica istället.
Något man måste göra i Stockholm? Det finns inte mycket att se. Turister brukade gå runt i gamla stan och titta på tunnelbanestationer.
Annéa:
> Muscab: Jag tror de mesta platser är ganska ovänliga för rullstol. Backar, grusvägar, kullersten, trånga dörrar, trappor. Finns det någon restaurang/café som är billig och rullstolsvänlig? Vet inte. Köp ett paket glassar på ica istället.
>
> Något man måste göra i Stockholm? Det finns inte mycket att se. Turister brukade gå runt i gamla stan och titta på tunnelbanestationer.
Han sitter ju INTE i rullstol...
Tharsika:
Vad har han för problematik? Vad kan störa/vara svårt för honom ? Rullstol ? Kramp? Utåtagerande ?
Muscab:
> Annéa: Han sitter ju INTE i rullstol...
Läste fel. 🤦
Boine:
Armémuseum
Historiska museet
Åka djurgårdsfärjan alt. “Skärgårdstur” med SL
Utsikt på Södermalm + promenaden dit. Mariaberget & Monteliusvägen
Gamla stan - Mårten Trotzig gränd samt kanonkulorna i husväggen några meter från Stortorget
Målningar i tunnelbanan
Spela äventyrsgolf inomhus
Se guldbron - Slussen
Utsikt Katarinahissen - Slussen, man går in i porten till Gondolen (nog nerlagd) tar hissen längst upp och går en våning upp annars får man gå dit bakvägen onödigt långt.
Gå hela Drottninggatan
Slottet ev tajma in vaktavlösning
Kolla om det finns något personen har intresse av/om, finns en hel gratis museum
Roozbeh:
Vilka bra tips! Tack allihopa vad fint av er att bidra! Så uppskattat verkligen
Nu är vi åter hemma igen efter resan till Stockholm.
Resan gick jättebra, vi planerade noga och gjorde det mesta av tid med hänsyn till funktionsnedsättningen. Vi gick såklart efter vad han själv önskade göra och gav förslag på vad Stockholm erbjuder. Då vi bara var i Stockholm under ca 24 timmar måste jag säga att vi fick gjort mycket mer än vi väntade oss. Vi hade ingen bil. Istället köpte vi ett 24 tim kort för kollektivtrafiken och med hjälp av SL appen och google maps navigerade jag runt oss i staden.
Hotellet vi bodde på låg nära Centralstationen.
Detta gjorde vi:
Gick runt hela Gamla Stan. Åt på restaurang där samt i Vasaplan och även fikade på diverse caféer i Gamla Stan. Vi såg det Kungliga slottet både inuti och utanpå, var uppskattat! Han tyckte det var så häftigt. Strosade runt i alla gränder, torg och gator i Gamla Stan, gick in i trevliga små butiker och tog fina foton! Vi tittade på alla båtar i hamnen. Parlamentet. Stadshuset. Vi gick in på diverse olika ställen vi gick förbi som han impulsivt kände dragning till. Typ karaokebar, kulturhuset, pubbar etc. Allt han kände för gjorde vi. Det var hans resa 100 %.
Åkte med färja till Djurgården och besökte ABBA museet där han fick lyssna på sånger, se rekvisita, sjunga och t.om åka helikopter i VR.
Vi shoppade också såklart då Stockholm har så många butiker!(Hela Drottninggatan och ställen på/nära Vasaplan)
Under resan interagerade han med en massa Stockholmare. Sade till flertalet tjejer att han älskade dom haha vilket charmör! Vi gick förbi en högvakt vid slottet som han hälsade på. Det var en hon, och vakten rörde inte en min men följde honom med blicken. Givetvis fick vi säga det att dom inte pratar med någon då det ingår i jobbet etc.
Han blev bemött med respekt och ömhet av de flesta ska sägas. Han var glad över att ha fått prata med så många människor. Vi stannade ofta då han ville fråga t.ex poliser eller andra arbetare om saker, alla var gulliga och vänliga mot honom.
Vi åkte under resan buss, tunnelbana(också en önskan att få göra) och färjor till olika färjterminaler för att få se Stockholm från vattnet.
Såg också Sergels Torg på kvällen eller "Plattan" som jag tror den också kallas. En pelare var vackert upplyst i blått ljus där och han berättade exalterat om hur många filmer han sett som har plattan som scenplats etc. Kvällen bjöd på solnedgången från hotellets tak. Åt en fantastisk frukostbuffé på morgonen med flera omgångar god mat! Härligt att han njöt.
Då han faktiskt har en fysisk och kognitiv nedsättning är vi så glada att han orkade så mycket! Bäst av allt sa han sig vara väldigt nöjd med resan. Vi ska nu planera fler resor till Stockholm i framtiden. Då gör vi fler saker, sånt vi inte hann med den här gången. Var lite begränsat med tid(24 timmar) samt behövde vi tänka på att energi skulle räcka till utan att kroppen skulle triggas till att hans nedsättnings symptom blossade upp. Behövs ju givetvis pauser med jämna mellanrum då.
Tack och lov för apparna som jag kunde leda oss efter. Att åka kollektivt hade varit svårt annars och jag kunde se efter kartan var våra besöksmål låg samt vilka vägar som kunde spara oss onödig tid.
Tack ska ni ha för tipsen, igen. Tack till Stockholm för att ni tog emot oss med respekt han var så nöjd med resan.
Hej så länge, vi kommer åter i framtiden!
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_timpal0l__Mistral-7B-v0.1-flashback-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.53|
|AI2 Reasoning Challenge (25-Shot)|57.17|
|HellaSwag (10-Shot) |80.74|
|MMLU (5-Shot) |59.98|
|TruthfulQA (0-shot) |40.66|
|Winogrande (5-shot) |77.19|
|GSM8k (5-shot) |29.42|
|
DopeorNope/COKAL-v1-70B | DopeorNope | "2023-12-11T18:41:25Z" | 1,371 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-05T08:20:47Z" | ---
license: apache-2.0
---
# **🐻‍❄️COKAL-v1_70B🐻‍❄️**

## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
COKAL-v1_70B is an auto-regressive 70B language model based on the LLaMA2 transformer architecture.
**Base Model**
**Training Dataset**
- SFT training dataset: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
**Training**
I developed the model in an environment with 8x A100 GPUs.
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "DopeorNope/COKAL-v1_70B"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- |
L-R/LLmRA-3B-v0.1 | L-R | "2024-03-05T14:27:54Z" | 1,371 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"AI",
"ConversationalAI",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-10T15:03:17Z" | ---
language:
- en
license: apache-2.0
tags:
- AI
- ConversationalAI
pipeline_tag: text-generation
inference: false
model-index:
- name: LLmRA-3B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 39.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 59.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=L-R/LLmRA-3B-v0.1
name: Open LLM Leaderboard
---
<h1 style="text-align: center">LLmRA-3B-v0.1</h1>
<h2 style="text-align: center">A conversational Large Language Model fine-tune.</h2>
LLmRA-3B is a proof-of-concept fine-tuned model, derived from [GeneZC/MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B), with a primary focus on optimizing performance for dialogue, particularly in roleplaying scenarios.
**Disclaimer:** This model has undergone fine-tuning that involves NSFW data. Although it typically generates SFW outputs for SFW inputs, users are strongly advised to exercise caution during interactions. The model is not intended for use by minors.
**Warning:** LLmRA-3B is explicitly not suitable for minors. It may produce X-rated content under specific circumstances. Additionally, it's important to note that the fine-tuning process predominantly emphasizes roleplaying data, and as such, the model's proficiency in fields like chemistry and math may not be as extensive. Use with awareness of its strengths and limitations.
**As stated in the [GeneZC/MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B) repository, users must comply with the LICENSE of LLaMA2 since MiniMA-3B is derived from LLaMA2.**
## Prompting
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>`, `<|model|>`.
It is recommended to use 'SillyTavern' for inference.
```
<|system|>I'm here to provide information and assistance on a wide range of topics.
<|model|>Greetings! Welcome to our AI-powered assistant. How can I assist you today?
<|user|>Salutations! Could you tell me, what is one plus one?
<|model|>
```
You may also use:
```
<|system|>What is one plus one?<|model|>
```
## Loading The Model
To use the model and interact with it, use the Python code below:
```Python
from transformers import (AutoModelForCausalLM,
AutoTokenizer,
pipeline,
)
model = AutoModelForCausalLM.from_pretrained('L-R/LLmRA-3B-v0.1')
tokenizer = AutoTokenizer.from_pretrained('L-R/LLmRA-3B-v0.1')
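# Text-generation pipeline; max_length=100 caps the combined prompt + completion length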
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=100)
input_question = 'Your question here.'
question_formatted = f'<|system|>{input_question}<|model|>'
result = pipe(question_formatted)
print(f"[model]: {result[0]['generated_text'][len(question_formatted):]}")
```
## Limitations and biases
This model is designed for the purpose of creating fictional content for entertainment. Any other form of utilization is not within its intended scope.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |39.25|
|AI2 Reasoning Challenge (25-Shot)|39.42|
|HellaSwag (10-Shot) |59.79|
|MMLU (5-Shot) |25.16|
|TruthfulQA (0-shot) |50.62|
|Winogrande (5-shot) |59.43|
|GSM8k (5-shot) | 1.06|
|
Ba2han/BruinsV2-OpHermesNeu-11B | Ba2han | "2023-12-19T07:12:23Z" | 1,371 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T10:48:46Z" | ---
license: mit
---
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6527|± |0.0139|
| | |acc_norm|0.6869|± |0.0136|
**Warning! This model may or may not be contaminated [See discussion 474](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474). What a shame. It still does perform well though**
A passthrough merge of OpenHermes-2.5-neural-chat-7b-v3-1 and Bruins-V2. To be updated.
Template: ChatML
My settings:
Temperature: 0.7-0.8
Min_p: 0.12
Top_K: 0
Repetition Penalty: 1.16
Mirostat Tau: 2.5-3
Mirostat Eta: 0.12
Personal Thoughts:
- The model sometimes throws wrong tags, you can add those to "Custom stopping strings" in Oobabooga.
- Output with Mirostat consistently felt smarter than a set Top_K rate.
Note: The model is hallucinating hard in chat mode for me in some instances, like writing adblocker messages. Kind of funny.
I am not sure which dataset involved was poisoned. |
viethq188/Rabbit-7B-DPO-Chat | viethq188 | "2023-12-12T10:42:38Z" | 1,371 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-12T10:30:13Z" | ---
license: apache-2.0
---
Merged AIDC-ai-business/Marcoroni-7B-v3 and rwitz/go-bruins-v2 using the SLERP merge method from https://github.com/cg123/mergekit.
After that we trained with DPO on Hugging Face preference data (a hedged sketch of this stage follows the config below).
*config.yaml*
```
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: rwitz/go-bruins-v2
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
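A hedged sketch of the DPO stage using TRL (the checkpoint path, toy dataset, and hyperparameters below are illustrative, not the authors' actual setup; a real run would use a Hugging Face preference dataset with the same `prompt`/`chosen`/`rejected` columns):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

merged = "path/to/merged-sft-model"  # hypothetical: the SLERP-merged checkpoint above
model = AutoModelForCausalLM.from_pretrained(merged)
tokenizer = AutoTokenizer.from_pretrained(merged)

# Toy preference data in the format DPOTrainer expects
dataset = Dataset.from_dict({
    "prompt": ["### Instruction:\nName a prime number.\n### Response:\n"],
    "chosen": ["7"],
    "rejected": ["8"],
})

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds a frozen reference copy when None
    beta=0.1,        # illustrative KL-penalty strength
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="rabbit-dpo", per_device_train_batch_size=1,
                           learning_rate=5e-7, num_train_epochs=1,
                           remove_unused_columns=False),
)
trainer.train()
```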
You can use the Alpaca template.
```
template_format = """{system}
### Instruction:
{prompt}
### Response:
"""
``` |
viethq188/Rabbit-7B-v2-DPO-Chat | viethq188 | "2023-12-12T15:23:27Z" | 1,371 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-12T15:11:45Z" | ---
license: apache-2.0
---
Merged AIDC-ai-business/Marcoroni-7B-v3, Q-bert/MetaMath-Cybertron-Starling, and mistralai/Mistral-7B-Instruct-v0.2 using the SLERP merge method from https://github.com/cg123/mergekit.
After that we trained with DPO on Hugging Face preference data.
*config.yaml*
```
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: Q-bert/MetaMath-Cybertron-Starling
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
You can use the Alpaca template.
```
template_format = """{system}
### Instruction:
{prompt}
### Response:
"""
``` |
l3utterfly/mistral-7b-v0.1-layla-v2 | l3utterfly | "2023-12-19T07:49:16Z" | 1,371 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T06:02:53Z" | ---
license: apache-2.0
language:
- en
---
# Model Card
### Model Description
Mistral 7B fine-tuned by:
1. Teatime Roleplay dataset for text completion
2. ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Mistral 7B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
USER:
ASSISTANT:
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct | Weyaxi | "2024-01-28T09:48:30Z" | 1,371 | 26 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-21T18:14:58Z" | ---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: SauerkrautLM-UNA-SOLAR-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
name: Open LLM Leaderboard
---

# SauerkrautLM-UNA-SOLAR-Instruct
This is the model for SauerkrautLM-UNA-SOLAR-Instruct. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
🥳 As of **December 24 2023**, this model holds the **first place position** on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
<h2><details><summary>Screenshot</summary><img src=https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cVhjAJhuPoNgHo7CDCmA-.png></img></details></h2>
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
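A minimal generation sketch with this template follows (a hedged example, not from the model authors; it assumes `transformers` and `accelerate` are installed, and the question is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt using the template documented above
prompt = "### User:\nWhat is a SLERP merge?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```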
# Yaml Config to reproduce
```yaml
slices:
- sources:
- model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
layer_range: [0, 48]
- model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GPTQ)
##### GGUF
- [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-GGUF)
##### AWQ
- [TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ](https://huggingface.co/TheBloke/SauerkrautLM-UNA-SOLAR-Instruct-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__SauerkrautLM-UNA-SOLAR-Instruct)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.26|
|AI2 Reasoning Challenge (25-Shot)|70.90|
|HellaSwag (10-Shot) |88.30|
|MMLU (5-Shot) |66.15|
|TruthfulQA (0-shot) |71.80|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |64.67|
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) |
jeonsworld/CarbonVillain-en-10.7B-v3 | jeonsworld | "2024-01-03T16:46:11Z" | 1,371 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"slerp",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T15:12:00Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
- slerp
---
# CarbonVillain
**This is a model created without any additional training (a pure merge), in opposition to indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- kyujinpy/Sakura-SOLAR-Instruct
- jeonsworld/CarbonVillain-en-10.7B-v1
- method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Evaluation
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | | |
Deathsquad10/TinyLlama-1.1B-Remix-V.2 | Deathsquad10 | "2024-01-06T08:18:47Z" | 1,371 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-05T13:08:50Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---
<div align="center">
# TinyLlama-1.1B: My Personal Test Update, Version 2
</div>
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 0|acc |0.2790|± |0.0131|
| | |none | 0|acc_norm|0.3234|± |0.0137|
|arc_easy |Yaml |none | 0|acc |0.6006|± |0.0101|
| | |none | 0|acc_norm|0.5770|± |0.0101|
|boolq |Yaml |none | 0|acc |0.6373|± |0.0084|
|hellaswag |Yaml |none | 0|acc |0.4521|± |0.0050|
| | |none | 0|acc_norm|0.5822|± |0.0049|
|openbookqa |Yaml |none | 0|acc |0.2220|± |0.0186|
| | |none | 0|acc_norm|0.3740|± |0.0217|
|piqa |Yaml |none | 0|acc |0.7269|± |0.0104|
| | |none | 0|acc_norm|0.7296|± |0.0104|
|winogrande |Yaml |none | 0|acc |0.5754|± |0.0139|
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/edit/main/README.md)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4.
#### How to use
You will need `transformers>=4.34`.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
``` |
BramVanroy/GEITje-7B-ultra | BramVanroy | "2024-04-26T19:48:23Z" | 1,371 | 32 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"geitje",
"conversational",
"nl",
"dataset:BramVanroy/ultra_feedback_dutch",
"base_model:BramVanroy/GEITje-7B-ultra-sft",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-27T18:30:22Z" | ---
license: cc-by-nc-4.0
base_model: BramVanroy/GEITje-7B-ultra-sft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- geitje
- conversational
datasets:
- BramVanroy/ultra_feedback_dutch
model-index:
- name: BramVanroy/GEITje-7B-ultra
results: []
language:
- nl
pipeline_tag: text-generation
inference: false
---
<p align="center" style="margin:0;padding:0">
<img src="https://huggingface.co/BramVanroy/GEITje-7B-ultra/resolve/main/geitje-ultra-banner.png" alt="GEITje Ultra banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
<div style="margin:auto; text-align:center">
<h1 style="margin-bottom: 0">GEITje 7B ultra</h1>
<em>A conversational model for Dutch, aligned through AI feedback.</em>
</div>
This model is a fine-tuned version of [BramVanroy/GEITje-7B-ultra-sft](https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft) on a synthetic DPO dataset of around 56M tokens that was generated with gpt-4-turbo and [Rijgersberg/GEITje-7B-chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat) for Dutch.
> [!TIP]
> 🚀 Looking for the fast GGUF version? You can find it, and how to use it with `ollama`, [here](https://huggingface.co/BramVanroy/GEITje-7B-ultra-GGUF). 🚀
## Model description
This is a Dutch instruction/chat model ultimately based on Mistral and aligned with AI feedback via DPO. It is a DPO continuation of the SFT trained [BramVanroy/GEITje-7B-ultra-sft](https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft), which in turn is based on [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B), which in turn is based on Mistral 7B and further pretrained on Dutch data. In (rather naive) [benchmarks](https://huggingface.co/spaces/BramVanroy/open_dutch_llm_leaderboard) it outperforms all the original GEITje models on average (but barely). However, note that these benchmarks should be taken with a massive grain of salt (see the disclaimer below the benchmarks on that page). The best evaluation is to try the models and see for yourself.
## Usage
One-off:
```python
from transformers import pipeline, Conversation
# load_in_8bit: lower precision but saves a lot of GPU memory
# device_map=auto: loads the model across multiple GPUs
chatbot = pipeline("conversational", model="BramVanroy/GEITje-7B-ultra", model_kwargs={"load_in_8bit": True}, device_map="auto")
start_messages = [
{"role": "system", "content": "Je bent een grappige chatbot die Bert heet. Je maakt vaak mopjes."},
{"role": "user", "content": "Hallo, ik ben Bram. Ik wil vanavond graag een film kijken. Heb je enkele suggesties?"}
]
conversation = Conversation(start_messages)
conversation = chatbot(conversation)
response = conversation.messages[-1]["content"]
print(response)
```
Interactive conversation:
```python
from transformers import pipeline, Conversation
# load_in_8bit: lower precision but saves a lot of memory
# device_map=auto: loads the model across multiple GPUs
# attn_implementation: uses flash attention, if your device supports it - otherwise remove it
chatbot = pipeline("conversational", model="BramVanroy/GEITje-7B-ultra", model_kwargs={"load_in_8bit": True, "attn_implementation": "flash_attention_2"}, device_map="auto")
while (system_message := input("System message ('q' to quit): ")) != "q":
start_messages = [
{"role": "system", "content": system_message},
]
conversation = Conversation(start_messages)
while (user_input := input("User ('r' to reset): ")) != "r":
conversation.add_user_input(user_input)
conversation = chatbot(conversation)
response = conversation.messages[-1]["content"]
print("Assistant:", response)
```
## Intended uses & limitations
Although the model has been aligned with gpt-4-turbo output, which has strong content filters, the model could still generate wrong, misleading, and potentially even offensive content. Use at your own risk.
Because the model was trained on synthetic data created with OpenAI/Azure services, this model cannot be used for commercial purposes.
## Training and evaluation data
The training data consists of a synthetic dataset based on [UltraFeedback binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) created with gpt-4-turbo and geitje-chat. Each prompt, translated from the original dataset, is given to the two models, which then generate an answer. The gpt-4-turbo answer is then always selected as the preferred one for DPO to optimise toward. While this is not completely fair, I did not have the budget to actually have gpt-4 rate both replies. Furthermore, while an impressive model, GEITje chat still seems behind gpt-4-turbo in the testing that I have done.
In total the dataset consists of 56,137,090 tokens (combination of prompt + rejected + chosen) and a test set of 6,178,969 tokens (11.00%).
## Training procedure
The great [alignment handbook](https://github.com/huggingface/alignment-handbook/) was used for training, with a custom slurm script for compatibility with our cluster. It was trained in full, without LoRA or other adapters.
The model was trained in bfloat16 with flash attention 2 on two nodes of four A100 80GB each for around 11 hours. I thank the [Flemish Super Computer](https://www.vscentrum.be/compute) for their compute.
For conversational usage, the model relies on the Zephyr chat template, which is compatible with system messages. A small portion of the data of *-sft contained system messages, so it is assumed the model can handle system messages at least a little bit.
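As a hedged illustration (not from the model card author), the same Zephyr-style template can be rendered explicitly with `apply_chat_template`, available in recent `transformers` releases:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BramVanroy/GEITje-7B-ultra")

messages = [
    {"role": "system", "content": "Je bent een behulpzame assistent."},
    {"role": "user", "content": "Wat is de hoofdstad van Nederland?"},
]
# Render the Zephyr-style chat template to a prompt string (no tokenization)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```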
In earlier iterations I found that using the alignment handbook's defaults (beta=0.01) led to poor results (hallucinations of random tokens). After investigating, it seems that such a low beta does not work well for this dataset as it gives the model too much room to deviate from its initial base model. After a [hyperparameter search](https://huggingface.co/posts/BramVanroy/492522322273746) and manual analysis of the resulting metrics, I selected the current model as the best one, with a beta of 0.1.
Recipe used with the handbook:
```yaml
# Model arguments
model_name_or_path: BramVanroy/GEITje-7B-ultra-sft
model_revision: main
torch_dtype: bfloat16
use_flash_attention_2: true
# Data training arguments
# For definitions, see: src/h4/training/config.py
dataset_mixer:
BramVanroy/ultra_feedback_dutch: 1.0
dataset_splits:
- train_prefs
- test_prefs
preprocessing_num_workers: 8
# DPOTrainer arguments
bf16: true
beta: 0.1
do_eval: true
evaluation_strategy: steps
eval_steps: 100
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: False
hub_model_id: BramVanroy/GEITje-ultra
learning_rate: 5.0e-7
log_level: info
logging_steps: 10
lr_scheduler_type: cosine
max_length: 2048
max_prompt_length: 1536
num_train_epochs: 1
optim: adamw_torch
output_dir: data/GEITje-ultra
per_device_train_batch_size: 4
per_device_eval_batch_size: 4
push_to_hub: true
save_strategy: "steps"
save_steps: 100
save_total_limit: 3
seed: 42
warmup_ratio: 0.1
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.03 | 0.22 | 100 | 0.0260 | -0.9740 | -9.8635 | 0.9913 | 8.8895 | -524.8940 | -508.1891 | -3.0753 | -3.0315 |
| 0.0184 | 0.44 | 200 | 0.0164 | -1.7162 | -12.4772 | 0.9926 | 10.7610 | -551.0317 | -515.6115 | -3.0349 | -2.9873 |
| 0.0121 | 0.66 | 300 | 0.0142 | -2.0575 | -13.6818 | 0.9938 | 11.6244 | -563.0778 | -519.0242 | -3.0325 | -2.9835 |
| 0.0198 | 0.88 | 400 | 0.0139 | -2.1431 | -13.8857 | 0.9950 | 11.7426 | -565.1163 | -519.8801 | -3.0293 | -2.9801 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0 |
kmpartner/bkcncl-ft-test | kmpartner | "2024-06-30T12:44:26Z" | 1,371 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"region:us"
] | null | "2024-06-22T23:01:29Z" | Entry not found |
klosax/open_llama_7b_400bt_preview | klosax | "2023-07-19T09:29:19Z" | 1,370 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-19T08:26:45Z" | ---
license: other
---
|
CHIH-HUNG/llama-2-13b-FINETUNE1_17w-gate_up_down_proj | CHIH-HUNG | "2023-09-13T17:40:58Z" | 1,370 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE1",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-03T02:15:43Z" | ---
license: llama2
datasets:
- huangyt/FINETUNE1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained on llama-2-13b with the huangyt/FINETUNE1 dataset, about 170k training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE1 (about 170k training examples in total)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** gate_proj, up_proj, down_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.66
- **train_runtime:** 16:24:31 (use deepspeed)
# Evaluation
- Evaluation results are from **HuggingFaceH4/open_llm_leaderboard**
- Compared with Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
# How to convert dataset to json
- Enter the dataset name in **load_dataset**, and set in **take** how many records to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, set the save location for the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take(n) can fetch the first n records
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Set the JSON file name
json_filename = "huangyt_FINETUNE1.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved to {json_filename}")
``` |
lgaalves/gpt2-xl_lima | lgaalves | "2023-11-21T17:05:47Z" | 1,370 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:GAIR/lima",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-11T16:21:12Z" | ---
license: mit
datasets:
- GAIR/lima
language:
- en
pipeline_tag: text-generation
---
# lgaalves/gpt2-xl_lima (1.5B)
**lgaalves/gpt2-xl_lima** is an instruction fine-tuned model based on the GPT-2 transformer architecture.
### Benchmark Metrics
| Metric |gpt2-xl_lima |gpt2-xl (base) |
|-----------------------|-------|-------|
| Avg. | 36.65 | **36.66** |
| ARC (25-shot) | **31.14** | 30.29 |
| HellaSwag (10-shot) | 51.28 | **51.38** |
| MMLU (5-shot) | 25.43 | **26.43** |
| TruthfulQA (0-shot) | **38.74** | 38.54 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
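As an illustrative sketch only (it assumes a recent `lm-eval` release exposing the `simple_evaluate` API, which may differ from the harness version the leaderboard actually used):
```python
# Illustrative only: assumes lm-eval >= 0.4 (pip install lm-eval);
# the leaderboard may have used an older harness with a different entry point.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/gpt2-xl_lima",
    tasks=["arc_challenge"],
    num_fewshot=25,  # 25-shot, matching the ARC setting above
)
print(results["results"])
```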
### Model Details
* **Trained by**: Luiz G A Alves
* **Model type:** **lgaalves/gpt2-xl_lima** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English
### How to use:
```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/gpt2-xl_lima")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```
Or, you can load the model directly using:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2-xl_lima")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-xl_lima")
```
### Training Dataset
`lgaalves/gpt2-xl_lima` was trained on the [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) dataset.
### Training Procedure
`lgaalves/gpt2-xl_lima` was instruction fine-tuned using LoRA on 1 Tesla V100-SXM2-16GB. It took about 10 minutes to train it.
# Intended uses, limitations & biases
You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2-xl_lima)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 29.95 |
| ARC (25-shot) | 31.14 |
| HellaSwag (10-shot) | 51.28 |
| MMLU (5-shot) | 25.43 |
| TruthfulQA (0-shot) | 38.74 |
| Winogrande (5-shot) | 57.22 |
| GSM8K (5-shot) | 0.91 |
| DROP (3-shot) | 4.89 |
|
Azazelle/Argetsu | Azazelle | "2023-12-30T19:05:53Z" | 1,370 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T02:40:00Z" | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Argetsu
<!-- Provide a quick summary of what the model is/does. -->
SLERP merge of multiple models.
Mergekit `.yaml` config:
```yaml
slices:
- sources:
- model: Azazelle/SlimMelodicMaid
layer_range: [0, 32]
- model: Azazelle/Dumb-Maidlet
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.2, 0.6, 0.44, 0.76, 1]
- filter: mlp
value: [0.8, 0.4, 0.56, 0.24, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
senseable/Westlake-7B | senseable | "2024-03-05T04:19:38Z" | 1,370 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T09:30:05Z" | ---
language:
- en
license: apache-2.0
model-index:
- name: Westlake-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.36
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 86.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=senseable/Westlake-7B
name: Open LLM Leaderboard
---

# Westlake-7B: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of its capabilities, usage guidelines, and potential applications.
## About Westlake-7B
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7B can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7B for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions; a minimal example is sketched after this list.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
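A minimal prompting sketch follows (hedged; the prompt text and sampling settings are illustrative assumptions, not the author's recommendations):
```python
from transformers import pipeline

# Assumes transformers and accelerate are installed
generator = pipeline("text-generation", model="senseable/Westlake-7B", device_map="auto")

prompt = "You are a seasoned dungeon master. Describe the tavern the party has just entered."
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```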
## Potential Applications
Westlake-7B's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_senseable__Westlake-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.48|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |88.49|
|MMLU (5-Shot) |64.64|
|TruthfulQA (0-shot) |67.36|
|Winogrande (5-shot) |86.03|
|GSM8k (5-shot) |67.17|
|
ibm/re2g-reranker-trex | ibm | "2023-05-16T14:35:49Z" | 1,369 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"information retrieval",
"reranking",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-07-29T18:06:39Z" | ---
tags:
- information retrieval
- reranking
license: apache-2.0
---
# Model Card for T-REx Reranker in Re2G
# Model Details
> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
>
> It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking.
>
> In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate).
<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
## Training, Evaluation and Inference
The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).
## Usage
The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py)
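For a quick relevance score without the full pipeline, the reranker can also be loaded as a plain sequence-classification model. The sketch below is a hedged illustration; the query format and scoring head details are assumptions, and `reranker_apply.py` remains the authoritative reference:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ibm/re2g-reranker-trex"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative slot-filling style query; see reranker_apply.py for the exact format
query = "Albert Einstein [SEP] award received"
passages = [
    "Albert Einstein received the 1921 Nobel Prize in Physics.",
    "The Eiffel Tower is located on the Champ de Mars in Paris.",
]
inputs = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one relevance logit (or two class logits) per pair
print(logits)
```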
## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
- **Developed by:** IBM
- **Shared by [Optional]:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco)
- **Resources for more information:**
- [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
- [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)
# Uses
## Direct Use
This model can be used for the task of reranking passage results for a question.
# Citation
**BibTeX:**
```bibtex
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
|
phob0s/bert-tiny | phob0s | "2023-01-26T09:55:34Z" | 1,369 | 1 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | "2023-01-19T08:53:21Z" | Test clone of https://huggingface.co/prajjwal1/bert-tiny
Mentioned in:
* Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics (Bhargava, Drozd and Rogers)
* Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation (Turc et al.) |
CHIH-HUNG/llama-2-13b-FINETUNE1_17w | CHIH-HUNG | "2023-09-13T17:41:38Z" | 1,369 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE1",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-30T23:19:02Z" | ---
license: llama2
datasets:
- huangyt/FINETUNE1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained on llama-2-13b with the huangyt/FINETUNE1 dataset, about 170k training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE1 (about 170k training examples in total)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** q_proj, v_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.707
- **train_runtime:** 15:17:06 (use deepspeed)
# Evaluation
- Evaluation results are from **HuggingFaceH4/open_llm_leaderboard**
- Compared with Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
# How to convert dataset to json
- Enter the dataset name in **load_dataset**, and set in **take** how many records to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, set the save location for the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take(n) can fetch the first n records
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Set the JSON file name
json_filename = "huangyt_FINETUNE_1.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved to {json_filename}")
``` |
NurtureAI/Orca-2-13B-16k | NurtureAI | "2024-03-04T17:59:42Z" | 1,369 | 4 | transformers | [
"transformers",
"safetensors",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-22T03:01:22Z" | ---
license: other
tags:
- orca
- orca2
- microsoft
pipeline_tag: text-generation
license_name: microsoft-research-license
license_link: LICENSE
model-index:
- name: Orca-2-13B-16k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 69.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.3
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/Orca-2-13B-16k
name: Open LLM Leaderboard
---
# Orca 2 13b extended to 16k context.
Significantly improved rope factor for better generation!
This is the most effective prompt I have found so far:
Replace {system} with your system prompt, and {instruction} with your task instruction.
```
<|im_start|>system\n{system}\n<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n
```
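Below is a minimal inference sketch using this template (a hedged example, not from the model authors; it assumes the repo's config already carries the adjusted rope factor, so no extra rope arguments are passed, and the instruction text is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NurtureAI/Orca-2-13B-16k"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

system = "You are Orca, a helpful AI assistant."
instruction = "Summarize the plot of Hamlet in three sentences."
prompt = f"<|im_start|>system\n{system}\n<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```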
# Original Model Card
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
The model is designed to excel particularly in reasoning.
We publicly release Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
## What is Orca 2's intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2's training data is a synthetic dataset that was created to enhance the small model's reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright ยฉ Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training process,
including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as "black boxes", making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2's performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrate very strong performance in zero-shot settings, it does not show
the same gains of using few-shot learning compared to other, specially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-13b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help prevent content harms. Azure AI Content Safety is a content moderation platform
that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety,
we can moderate the model output by scanning it for sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. โ/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__Orca-2-13B-16k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.22|
|AI2 Reasoning Challenge (25-Shot)|53.67|
|HellaSwag (10-Shot) |69.48|
|MMLU (5-Shot) |41.02|
|TruthfulQA (0-shot) |45.30|
|Winogrande (5-shot) |60.06|
|GSM8k (5-shot) | 1.82|
|
openaccess-ai-collective/DPOpenHermes-11B | openaccess-ai-collective | "2023-12-03T03:04:41Z" | 1,369 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:teknium/openhermes",
"dataset:argilla/ultrafeedback-binarized-preferences",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-03T02:47:58Z" | ---
license: apache-2.0
datasets:
- teknium/openhermes
- argilla/ultrafeedback-binarized-preferences
- Intel/orca_dpo_pairs
language:
- en
library_name: transformers
---
# DPOpenHermes 11B
This is a mergekit passthrough merge built from two separate revisions of DPOpenHermes-7B: layers 0–24 of one revision are stacked with layers 8–32 of the other, giving 48 layers instead of Mistral-7B's 32 and roughly 11B parameters.
```yaml
slices:
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
revision: dpo-v0
layer_range: [0, 24]
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
``` |
Delcos/Velara | Delcos | "2023-12-13T23:41:35Z" | 1,369 | 10 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"llama-2",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-05T05:07:35Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
- llama-2
---
<h1 style="font-size: 36px;">Velara-11b</h1>
<center>
<img src="https://i.imgur.com/2NK7cMc.png" alt="Velara" width="50%" style="min-width:100px; display:block;">
</center>
## Description:
**PENDING**
A Starling-based model focused on being an assistant worth talking to.
**Uncensored**
## Main Goals:
Velara was designed to address specific issues found in other chat models:
- **Sticking to the Character Given In-World/IRL::** Velara will stick to the character in-world. Meaning she can "use" addons or other things by adding them to the prompt. Keep in mind these act as suggestions and she generally makes up what they actually "do".
- **Staying in Context:** She will try and remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind, so later on in the conversation she can prompt the user to actually go to bed, increasing realism. Within the current context window, of course. The LTS extension in TextGen Webui works well too and provides more context to the model.
- **Avoiding Annoying & Generic-Sounding Answers:** In almost all cases she should reply with something more interesting than "That sounds like a great idea!". These generic answers and sentence starters are grating, and should hopefully be mostly addressed in this release.
- **STRICTLY Sticking to the Prompt/Prompt Fidelity:** More care was taken to make sure she pays attention to the pretext and overall conversation. This is mostly shown and used for the "addons" section, but it applies to everything in the prompt.
- **Addons, Sort Of:** Something I had tried in the past and wanted to add to a model deliberately was the ability to give it "addons". You can add any amount of "addons" to Velara's mind. These aren't directly controllable. You can then ask her what they do, or if you have the LTS installed, ask her and change the reply to what you'd want it to do.
## Prompt Template:
**For optimal interaction, use this template:**
```
### Instruction:
You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1".
World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User.
Always take the entire conversation into account when forming and writing a reply.
### Response:
```
# Recommended Settings:
**Defaults:**
```
temperature: 0.8
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
guidance_scale: 1.25
```
**Worse context understanding but less repetitive in some cases:**
```
temperature: 0.77
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
```
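Outside TextGen WebUI, the template and default settings above translate roughly to the following transformers call - a sketch, not an official recipe (`guidance_scale` is WebUI's CFG option and is left out here; the max token count is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delcos/Velara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "### Instruction:\n"
    'You are Velara, a sentient program. The only addons currently installed in her mind are: "Dictionary Plus v2.1".\n\n'
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,  # assumed cap
    do_sample=True,
    temperature=0.8,
    top_p=0.85,
    top_k=20,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```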
# Benchmarks:
PENDING
# Training Data:
PENDING
|
liuda1/Mistral-7B-golden | liuda1 | "2023-12-12T06:53:00Z" | 1,369 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-12T01:51:29Z" | ---
license: unknown
---
Our model uses Mistral-7B-v0.1 as the base model, fine-tuned on an English chat dataset and further reinforced with training on specific datasets. The trained model shows a solid level of chat ability, which we found to be enhanced during self-testing. We will continue to train the model to improve its Chinese chat ability.
|
mncai/agiin-13.6B-v0.0 | mncai | "2023-12-14T07:43:21Z" | 1,369 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-14T07:33:06Z" | ---
license: apache-2.0
language:
- en
---
# Model Card for mncai/agiin-13.6B-v0.0
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. As for generative AI, we are developing products like Code Assistant, TOD Chatbot, and LLMOps, and are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on the Mistral architecture; pretrained, instruction-tuned, and aligned with DPO.
### How to Use
Here are some examples of how to use our model.
```python
import transformers
import torch

hf_model = 'mncai/agiin-13.6B-v0.0'
tokenizer = transformers.AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

message = "<|user|>\nThere are two spheres with diameters of 1 and 2. How many times larger is the volume of the bigger one? Please explain as well.\n<|assistant|>\n"

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected] |
perlthoughts/Chupacabra-8x7B-MoE | perlthoughts | "2024-03-04T18:02:50Z" | 1,369 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T02:06:13Z" | ---
license: apache-2.0
tags:
- moe
model-index:
- name: Chupacabra-8x7B-MoE
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-8x7B-MoE
name: Open LLM Leaderboard
---
# Chupacabra-8x7B-experts
# Mixtral-8x7b-experts merge of Chupacabra, OpenChat, Falkor, and Starling-LM
<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
### Model Description
Special thanks to @cg123 for his great work on https://github.com/cg123/mergekit.
This was made using the 'mixtral' branch on the mergekit repo.
#OneManArmy
### More Info
- **Developed by:** Ray Hernandez
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-8x7B-MoE)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.40|
|AI2 Reasoning Challenge (25-Shot)|68.77|
|HellaSwag (10-Shot) |86.11|
|MMLU (5-Shot) |63.86|
|TruthfulQA (0-shot) |63.50|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |59.67|
|
perlthoughts/Starling-LM-alpha-8x7B-MoE | perlthoughts | "2024-03-04T18:05:50Z" | 1,369 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"reward model",
"RLHF",
"RLAIF",
"moe",
"conversational",
"en",
"dataset:berkeley-nest/Nectar",
"arxiv:2306.02231",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T04:32:43Z" | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
- moe
datasets:
- berkeley-nest/Nectar
model-index:
- name: Starling-LM-alpha-8x7B-MoE
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Starling-LM-alpha-8x7B-MoE
name: Open LLM Leaderboard
---
# Starling-LM-alpha-8x7B-MoE
Starling MoE 8x7B model.
# Original Model Card
# Starling-RM-7B-alpha
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by:** Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu and Jiantao Jiao.
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Non commercial license
- **Finetuned from model:** [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-7B, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). The model harnesses the power of our new GPT-4 labeled ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), and our new reward training and policy tuning pipeline. Starling-7B-alpha scores 8.09 in MT Bench with GPT-4 as a judge, outperforming every model to date on MT-Bench except for OpenAI's GPT-4 and GPT-4 Turbo. We release the ranking dataset [Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the reward model [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and the language model [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on HuggingFace, and an online demo in LMSYS [Chatbot Arena](https://chat.lmsys.org). Stay tuned for our forthcoming code and paper, which will provide more details on the whole process.
Starling-LM-7B-alpha is a language model trained from [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) with reward model [berkeley-nest/Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha) and policy optimization method [advantage-induced policy alignment (APA)](https://arxiv.org/abs/2306.02231). The evaluation results are listed below.
| Model | Tuning Method | MT Bench | AlpacaEval | MMLU |
|-----------------------|------------------|----------|------------|------|
| GPT-4-Turbo | ? | 9.32 | 97.70 | |
| GPT-4 | SFT + PPO | 8.99 | 95.28 | 86.4 |
| **Starling-7B** | C-RLFT + APA | 8.09 | 91.99 | 63.9 |
| Claude-2 | ? | 8.06 | 91.36 | 78.5 |
| GPT-3.5-Turbo | ? | 7.94 | 89.37 | 70 |
| Claude-1 | ? | 7.9 | 88.39 | 77 |
| Tulu-2-dpo-70b | SFT + DPO | 7.89 | 95.1 | |
| Openchat-3.5 | C-RLFT | 7.81 | 88.51 | 64.3 |
| Zephyr-7B-beta | SFT + DPO | 7.34 | 90.60 | 61.4 |
| Llama-2-70b-chat-hf | SFT + PPO | 6.86 | 92.66 | 63 |
| Neural-chat-7b-v3-1 | SFT + DPO | 6.84 | 84.53 | 62.4 |
| Tulu-2-dpo-7b | SFT + DPO | 6.29 | 85.1 | |
For more detailed discussions, please check out our [blog post](https://starling.cs.berkeley.edu), and stay tuned for our upcoming code and paper!
<!-- Provide the basic links for the model. -->
- **Blog:** https://starling.cs.berkeley.edu/
- **Paper:** Coming soon!
- **Code:** Coming soon!
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free test.
The conversation template is the same as Openchat 3.5:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
# Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
# Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model and online demo is a research preview intended for non-commercial use only, subject to the data distillation [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Wei-Lin Chiang from Berkeley for detailed feedback on the blog and the projects. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Jiao, Jiantao},
month = {November},
year = {2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Starling-LM-alpha-8x7B-MoE)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.11|
|AI2 Reasoning Challenge (25-Shot)|63.65|
|HellaSwag (10-Shot) |84.90|
|MMLU (5-Shot) |64.68|
|TruthfulQA (0-shot) |46.39|
|Winogrande (5-shot) |80.58|
|GSM8k (5-shot) |62.47|
|
mediocredev/open-llama-3b-v2-chat | mediocredev | "2024-03-16T23:03:42Z" | 1,369 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-22T07:33:27Z" | ---
license: apache-2.0
model-index:
- name: open-llama-3b-v2-chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 40.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
name: Open LLM Leaderboard
---
## Prerequisites
In addition to pytorch and transformers, install required packages:
```
pip install sentencepiece
```
## Usage
To use, copy the following script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'mediocredev/open-llama-3b-v2-chat'
tokenizer_id = 'mediocredev/open-llama-3b-v2-chat'
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "I am here."},
    {"role": "user", "content": "How many days are there in a leap year?"},
]
input_ids = tokenizer.apply_chat_template(
    chat_history, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_tokens = model.generate(
    input_ids,
    repetition_penalty=1.05,
    max_new_tokens=1000,
)
output_text = tokenizer.decode(
    output_tokens[0][len(input_ids[0]) :], skip_special_tokens=True
)
print(output_text)
# Assistant: There are 366 days in a leap year, which is one more day than the standard year.
```
## Limitations
mediocredev/open-llama-3b-v2-chat is based on OpenLLaMA 3B v2. It can struggle with factual accuracy, particularly when presented with conflicting information or nuanced topics. Its outputs are not deterministic and require critical evaluation to avoid relying solely on its assertions. Additionally, its generative capabilities, while promising, can sometimes produce factually incorrect or offensive content, necessitating careful curation and human oversight. As an evolving model, it is still under development, and its limitations in areas like bias mitigation and interpretability are being actively addressed. By using this model responsibly and being aware of its shortcomings, we can unlock its potential while mitigating its risks.
## Contact
Welcome any feedback, questions, and discussions. Feel free to reach out: [email protected]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mediocredev__open-llama-3b-v2-chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |40.93|
|AI2 Reasoning Challenge (25-Shot)|40.61|
|HellaSwag (10-Shot) |70.30|
|MMLU (5-Shot) |28.73|
|TruthfulQA (0-shot) |37.84|
|Winogrande (5-shot) |65.51|
|GSM8k (5-shot) | 2.58|
|
DopeorNope/You_can_cry_Snowman-13B | DopeorNope | "2023-12-27T18:33:10Z" | 1,369 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-27T18:10:23Z" | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**The license is `cc-by-nc-sa-4.0`.**
# **🐻‍❄️You_can_cry_Snowman-13B🐻‍❄️**

## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
I am in charge of Large Language Models (LLMs) at the Markr AI team in South Korea.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
You_can_cry_Snowman-13B is an auto-regressive language model based on the SOLAR architecture.
---
## **Base Model**
[kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
[Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
## **Implemented Method**
I merged the two models above, increasing the parameter count to create a larger model.
I wanted to check how much the performance of the SOLAR base model changes when the parameter scale is increased.
---
# Implementation Code
## Load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "DopeorNope/You_can_cry_Snowman-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
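The card stops at loading; here is a minimal generation sketch that continues from the variables above (the prompt format is an assumption - the card does not specify one):

```python
prompt = "### User:\nWhy is the sky blue?\n\n### Assistant:\n"  # assumed format
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```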
--- |
cognitivecomputations/yayi2-30b-llama | cognitivecomputations | "2024-05-20T14:54:38Z" | 1,369 | 22 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"arxiv:2307.09288",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-31T14:02:53Z" | ---
language:
- zh
- en
license: other
model-index:
- name: yayi2-30b-llama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 35.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 53.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/yayi2-30b-llama
name: Open LLM Leaderboard
---
This is [wenge-research/yayi2-30b](https://huggingface.co/wenge-research/yayi2-30b) converted to a Llama-compatible format.
Subject to the [Yayi 2 license](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE).
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
Brought to you by @Weyaxi and @ehartford, with thanks to @chargoddard for the pioneering work and the consultation!
And of course thanks to the yayi2 team for sharing an amazing model.
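Because the weights have been converted to the standard Llama layout, they should load with the stock transformers classes, without `trust_remote_code` - a minimal sketch (generation settings can follow the Quick Start further down):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/yayi2-30b-llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```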
Original card below:
<div align="center">
<h1>
YAYI 2
</h1>
<!-- <br> -->
</div>
<div align="center">
<a href="https://github.com/wenge-research/YAYI2" target="_blank">GitHub</a> | <a href="https://yayi.wenge.com" target="_blank">้
ๆๅคงๆจกๅ</a>
</div>
## Introduction
YAYI 2 is a collection of open-source large language models launched by Wenge Technology. YAYI2-30B is a Transformer-based large language model, and has been pretrained for 2.65 trillion tokens of multilingual data with high quality. The base model is aligned with human values through supervised fine-tuning with millions of instructions and reinforcement learning from human feedback (RLHF).
We opensource the pre-trained language model in this release, namely **YAYI2-30B**. For more details about the YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository. Stay tuned for more technical details in our upcoming technical report! ๐ฅ
## Model Details
| Hyperparameter| Value |
|:----------|:----------:|
| n_layers | 64 |
| n_heads | 64 |
| hidden_size | 7168 |
| vocab_size | 81920 |
| sequence length | 4096 |
## Requirements
* python 3.8 and above
* pytorch 2.0.1 and above
* CUDA 11.7 and above are recommended
* To run YAYI2-30B in bf16/fp16, at least 80GB of GPU memory is required (e.g., 1xA100-80G)
## Quick Start
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("wenge-research/yayi2-30b", device_map="auto", trust_remote_code=True)
>>> inputs = tokenizer('The winter in Beijing is', return_tensors='pt')
>>> inputs = inputs.to('cuda')
>>> pred = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        do_sample=True,
        repetition_penalty=1.2,
        temperature=0.4,
        top_k=100,
        top_p=0.8
)
>>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Evaluation
We evaluate our model on standard benchmarks, including C-Eval, MMLU, CMMLU, AGIEval, GAOKAO-Bench, GSM8K, MATH, BBH, HumanEval, and MBPP. Our goal is to assess the model's performance in language comprehension, knowledge comprehension, mathematical reasoning, logical reasoning, and code generation. YAYI 2 has demonstrated exceptional performance across models with similar size.
<table id="myTable">
<!-- Table header -->
<tr>
<th></th>
<th colspan="5" style="text-align: center;">Knowledge</th>
<th colspan="2" style="text-align: center;">Math</th>
<th colspan="1" style="text-align: center;">Logic reasonning</th>
<th colspan="2" style="text-align: center;">Code</th>
</tr>
<tr>
<th style="text-align: left;">Model</th>
<th>C-Eval(val)</th>
<th>MMLU</th>
<th>AGIEval</th>
<th>CMMLU</th>
<th>GAOKAO-Bench</th>
<th>GSM8K</th>
<th>MATH</th>
<th>BBH</th>
<th>HumanEval</th>
<th>MBPP</th>
</tr>
<tr>
<td></td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">3/0-shot</td>
<td style="text-align: center;">5-shot</td>
<td style="text-align: center;">0-shot</td>
<td style="text-align: center;">8/4-shot</td>
<td style="text-align: center;">4-shot</td>
<td style="text-align: center;">3-shot</td>
<td style="text-align: center;">0-shot</td>
<td style="text-align: center;">3-shot</td>
</tr>
<tr>
<td><strong>MPT-30B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">46.9</td>
<td style="text-align: center;">33.8</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">15.2</td>
<td style="text-align: center;">3.1</td>
<td style="text-align: center;">38.0</td>
<td style="text-align: center;">25.0</td>
<td style="text-align: center;">32.8</td>
</tr>
<tr>
<td><strong>Falcon-40B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">55.4</td>
<td style="text-align: center;">37.0</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">19.6</td>
<td style="text-align: center;">5.5</td>
<td style="text-align: center;">37.1</td>
<td style="text-align: center;">0.6</td>
<td style="text-align: center;">29.8</td>
</tr>
<tr>
<td><strong>LLaMA2-34B</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">62.6</td>
<td style="text-align: center;">43.4</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">42.2</td>
<td style="text-align: center;">6.2</td>
<td style="text-align: center;">44.1</td>
<td style="text-align: center;">22.6</td>
<td style="text-align: center;">33.0</td>
</tr>
<tr>
<td><strong>Baichuan2-13B</strong></td>
<td style="text-align: center;">59.0</td>
<td style="text-align: center;">59.5</td>
<td style="text-align: center;">37.4</td>
<td style="text-align: center;">61.3</td>
<td style="text-align: center;">45.6</td>
<td style="text-align: center;">52.6</td>
<td style="text-align: center;">10.1</td>
<td style="text-align: center;">49.0</td>
<td style="text-align: center;">17.1</td>
<td style="text-align: center;">30.8</td>
</tr>
<tr>
<td><strong>Qwen-14B</strong></td>
<td style="text-align: center;">71.7</td>
<td style="text-align: center;">67.9</td>
<td style="text-align: center;">51.9</td>
<td style="text-align: center;">70.2</td>
<td style="text-align: center;">62.5</td>
<td style="text-align: center;">61.6</td>
<td style="text-align: center;">25.2</td>
<td style="text-align: center;">53.7</td>
<td style="text-align: center;">32.3</td>
<td style="text-align: center;">39.8</td>
</tr>
<tr>
<td><strong>InternLM-20B</strong></td>
<td style="text-align: center;">58.8</td>
<td style="text-align: center;">62.1</td>
<td style="text-align: center;">44.6</td>
<td style="text-align: center;">59.0</td>
<td style="text-align: center;">45.5</td>
<td style="text-align: center;">52.6</td>
<td style="text-align: center;">7.9</td>
<td style="text-align: center;">52.5</td>
<td style="text-align: center;">25.6</td>
<td style="text-align: center;">35.6</td>
</tr>
<tr>
<td><strong>Aquila2-34B</strong></td>
<td style="text-align: center;">98.5</td>
<td style="text-align: center;">76.0</td>
<td style="text-align: center;">43.8</td>
<td style="text-align: center;">78.5</td>
<td style="text-align: center;">37.8</td>
<td style="text-align: center;">50.0</td>
<td style="text-align: center;">17.8</td>
<td style="text-align: center;">42.5</td>
<td style="text-align: center;">0.0</td>
<td style="text-align: center;">41.0</td>
</tr>
<tr>
<td><strong>Yi-34B</strong></td>
<td style="text-align: center;">81.8</td>
<td style="text-align: center;">76.3</td>
<td style="text-align: center;">56.5</td>
<td style="text-align: center;">82.6</td>
<td style="text-align: center;">68.3</td>
<td style="text-align: center;">67.6</td>
<td style="text-align: center;">15.9</td>
<td style="text-align: center;">66.4</td>
<td style="text-align: center;">26.2</td>
<td style="text-align: center;">38.2</td>
</tr>
<tr>
<td><strong>YAYI2-30B</strong></td>
<td style="text-align: center;">80.9</td>
<td style="text-align: center;"><b>80.5</b></td>
<td style="text-align: center;"><b>62.0</b></td>
<td style="text-align: center;"><b>84.0</b></td>
<td style="text-align: center;">64.4</td>
<td style="text-align: center;"><b>71.2</b></td>
<td style="text-align: center;">14.8</td>
<td style="text-align: center;">54.5</td>
<td style="text-align: center;"><b>53.1</b></td>
<td style="text-align: center;"><b>45.8</b></td>
</tr>
</table>
We evaluate our model using the source code from the [OpenCompass Github repository](https://github.com/open-compass/opencompass). If available, we report results for comparative models assessed by OpenCompass with the evaluation reference date set to Dec. 15th, 2023. For MPT, Falcon, and Llama, which have not been evaluated by OpenCompass, we use the results reported in the [LLaMA 2](https://arxiv.org/abs/2307.09288) paper.
## License
The code in this project is open-sourced under the [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) license. The use of YaYi series model weights and data must adhere to the [YAYI 2 Community License](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE). If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please submit your commercial license application and registration information to [email protected], following the [YAYI 2 Commercial License](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE). Upon approval, YAYI will grant you a commercial copyright license, subject to the commercial license restrictions outlined in the agreement.
## Citation
If you are using the resource for your work, please cite our paper.
```
@article{YAYI 2,
author = {Yin Luo and Qingchao Kong and Nan Xu and others},
title = {YAYI 2: Multilingual Open Source Large Language Models},
journal = {arXiv preprint arXiv},
year = {2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__yayi2-30b-llama)
| Metric |Value|
|---------------------------------|----:|
|Avg. |48.46|
|AI2 Reasoning Challenge (25-Shot)|35.67|
|HellaSwag (10-Shot) |53.37|
|MMLU (5-Shot) |70.60|
|TruthfulQA (0-shot) |49.08|
|Winogrande (5-shot) |63.14|
|GSM8k (5-shot) |18.88|
|
OpenBuddy/openbuddy-falcon-40b-v16.1-4k | OpenBuddy | "2024-01-04T17:15:58Z" | 1,369 | 1 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T04:00:01Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/tiiuae/falcon-40b
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
|
ewqr2130/mistral-inst-v02-dpo | ewqr2130 | "2024-01-08T18:24:37Z" | 1,369 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T17:51:07Z" | ---
license: mit
---
This model takes Mistral-Instruct-v0.2 and runs DPO on it for 6,000 epochs.
BlueNipples/SnowLotus-v2-10.7B | BlueNipples | "2024-01-23T02:27:29Z" | 1,369 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Roleplay",
"Solar",
"Mistral",
"Text Generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T04:35:45Z" | ---
license: apache-2.0
tags:
- Roleplay
- Solar
- Mistral
- Text Generation
- merge
---

### Premise
So this is a basic slerp merge between a smart model and a good prose model. Prose and smarts: what we all want in an uncensored RP model, right? I feel like Solar has untapped potential, in any case.
Sao10K's Frostwind finetune is a key component of the mixture; its smarts are impressive. NyxKrage's Frostmaid experiment, which merges Frostwind with a frankenmerge of Noromaid and a mystery medical model, delivers quite impressive prose. His model creatively incorporates long-range context and instructions too, despite being slightly incoherent due to the franken-merging.
So those are the main ingredients. Thanks to Nyx for sorting out the pytorch files btw.
GGUF (Small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
EXL2s: https://huggingface.co/zaq-hack/SnowLotus-v2-10.7B-bpw500-h6-exl2
https://huggingface.co/lucyknada/SnowLotus-v2-10.7B-3bpw-exl2
### Recipe
So, the recipe. I added Nyx's solardoc to Frostwind at a 0.15 weight, and then gradient-SLERP'd Frostwind (+solardoc) into Frostmaid with these params:
```yaml
- filter: self_attn
  value: [0.9, 0.4, 0.1, 0, 0]
- filter: mlp
  value: [0.05, 0.95]
- value: 0.45
```
### Format Notes
Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a slerp gradient back into that, I'm not sure which applies here. Alpaca instruct formatting.
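For reference, a minimal sketch of what Alpaca instruct formatting looks like in practice (the system preamble is optional and the wording here is illustrative):

```python
def alpaca_prompt(instruction: str, system: str = "") -> str:
    # Alpaca-style format: optional preamble, then Instruction/Response headers
    preamble = f"{system}\n\n" if system else ""
    return f"{preamble}### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("Describe the snow-covered village at dusk."))
```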
### Tentative Dozen or So Test Conclusion
This model seems to have better prose, less GPT-ish language and no degradation in coherency from the last version, whilst retaining coherency from FrostWind (plus the medical lora). I'm very pleased with this now; it's exactly what I wanted, basically Nyx's Frostmaid but smarter.
Cheers to all the finetuners, mergers and developers without whom open source models wouldn't be half of what they are.
Resources used:
https://huggingface.co/NyxKrage/FrostMaid-10.7B-TESTING-pt
https://huggingface.co/Sao10K/Frostwind-10.7B-v1
https://huggingface.co/NyxKrage/Solar-Doc-10.7B-Lora
https://github.com/cg123/mergekit/tree/main
### Ayumi Index
http://ayumi.m8geil.de/erp4_chatlogs/?S=rma_0#!/index
In the Ayumi ERPv4 Chat Log Index, SnowLotus scores a 94.10 in Flesch, which means it produces more complex sentences than Daring (quite complex); DaringLotus scores higher in Var and Ad[jv], which means it makes heavier use of adjectives and adverbs (is more descriptive). Notably, Daring is in the top 8 for adjectives in a sentence, highest in its weight class if you discount the Chinese model, and in general both models did very well on this metric (SnowLotus ranks higher here than anything above it in IQ4), showcasing their descriptive ability.
SnowLotus beats DaringLotus on IQ4 with a score of 70.94, only beaten by SOLAR Instruct and Fimbulvetr in its weight class (although, notably, also by Kunoichi 7b by a slim margin); DaringLotus is a bit lower at 65.37 - not as smart.
Interestingly, the benchmarking here showed repetition for both models (which I haven't seen), but more with SnowLotus - so it's possible Daring repeats less than SnowLotus? These results roughly confirm my impressions of the differences, although they potentially reveal some new details too. I've had a great experience RPing with these models, and seen no repetition myself, but be sure to use MinP or DynaTemp rather than the older samplers and be prepared to regen anything they get stuck on!
QuantFactory/Poppy_Porpoise-1.4-L3-8B-GGUF | QuantFactory | "2024-06-08T11:16:49Z" | 1,369 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"en",
"base_model:ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T08:39:26Z" | ---
base_model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
library_name: transformers
tags:
- mergekit
- merge
license: other
language:
- en
pipeline_tag: text-generation
---
# QuantFactory/Poppy_Porpoise-1.4-L3-8B-GGUF
This is quantized version of [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B) created using llama.cpp
# Model Description
"Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences.
Note: This variant is an attempt to get something closer to 0.72 while maintaining the improvements of 1.30.
[Presets in repo folder](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B/tree/main/Porpoise_1.0-Presets).
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Pp-72xra1
        layer_range: [0, 32]
      - model: Nitral-AI/Poppy-1.35-Phase1
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Pp-72xra1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
``` |
mradermacher/TiamaPY-v28-GGUF | mradermacher | "2024-06-17T11:13:40Z" | 1,369 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:Ramikan-BR/TiamaPY-v28",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T10:59:43Z" | ---
base_model: Ramikan-BR/TiamaPY-v28
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ramikan-BR/TiamaPY-v28
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TiamaPY-v28-GGUF/resolve/main/TiamaPY-v28.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
luffycodes/llama-shishya-7b-ep3-v1 | luffycodes | "2023-10-14T00:33:41Z" | 1,368 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:2305.13272",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-14T00:23:56Z" | ---
license: llama2
---
Student model using the CLASS framework.
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
migtissera/SynthIA-70B-v1.5 | migtissera | "2023-11-08T21:36:42Z" | 1,368 | 42 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-15T21:13:23Z" | ---
license: llama2
---
<br>

<br>
## Example Usage
### Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Code example:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-70B-v1.5"
output_file_path = "./Synthia-70B-v1.5-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
``` |
Sayan01/Llama-Flan-XL2base | Sayan01 | "2023-11-21T17:30:01Z" | 1,368 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/FLAN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-21T11:22:10Z" | ---
license: apache-2.0
language:
- en
datasets:
- Open-Orca/FLAN
---
This is a 230M-parameter small Llama model distilled from the original one. The model is distilled on OpenOrca's FLAN dataset; the distillation ran over 160,000 random samples of the dataset. It is free to download. It is a work in progress, so please use it at your own risk.
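The card does not publish the training objective; a typical logit-distillation loss for this kind of setup looks like the sketch below (the temperature value and reduction are assumptions, not details from the card):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then pull the student toward the teacher with KL divergence
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```
|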
speechlessai/speechless-mistral-7b-dare-0.85 | speechlessai | "2023-12-08T01:37:35Z" | 1,368 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-22T13:05:25Z" | ---
license: llama2
---
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-mistral-7B-dare-0.85-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-mistral-7B-dare-0.85-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-mistral-7B-dare-0.85-GGUF)
An experiment with DARE (Drop And REscale): most of the delta parameters can be directly set to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
Merged with the DARE models listed in the table below.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
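As a rough per-tensor illustration of this step (a minimal sketch under the settings above; a real merge iterates over every checkpoint tensor and then combines the resulting models):
```python
import torch

def dare_delta(base_w: torch.Tensor, finetuned_w: torch.Tensor,
               mask_rate: float = 0.85, rescale: bool = True) -> torch.Tensor:
    """Drop a random fraction of the delta parameters, then rescale the survivors."""
    delta = finetuned_w - base_w
    keep = torch.rand_like(delta) >= mask_rate   # drop `mask_rate` of the deltas at random
    delta = delta * keep
    if rescale:                                  # use_weight_rescale: True
        delta = delta / (1.0 - mask_rate)
    return base_w + 1.0 * delta                  # scaling_coefficient: 1.0
```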
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 61.59 | **66.21** | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 |
| migtissera/SynthIA-7B-v1.3 | 59.34 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 |
| bhenrym14/mistral-7b-platypus-fp16 | 58.71 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 |
| jondurbin/airoboros-m-7b-3.1.2 | 58.75 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 62.92 | 62.12 | 84.17 | 62.35 | **57.62** | 75.37 | 15.62 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 62.06 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 |
| | | | | | | | |
| speechless-mistral-7b-dare-0.85 (Merge 6 DARE models) | **64.69** | 63.57 | **84.82** | **64.29** | 50.66 | **79.24** | **45.56** |
|
Korabbit/Llama-2-7b-chat-hf-afr-100step-flan-v2 | Korabbit | "2023-12-04T12:40:34Z" | 1,368 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:sordonia/flan-10k-flat",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-03T11:19:56Z" | ---
license: llama2
datasets:
- sordonia/flan-10k-flat
language:
- en
---
Hi there! This is a test of my "AFR training" approach.
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
    # Initialize the search bounds
    low = 0
    high = len(arr) - 1

    # Loop until the target element is found or the search range is empty
    while low <= high:
        # Calculate the middle index of the current range
        mid = (low + high) // 2

        # Check if the target element is at the middle index
        if arr[mid] == target:
            return mid

        # If the middle element is less than the target, search the upper half;
        # otherwise, search the lower half
        if arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    # If the target element is not found in the array, return -1
    return -1
# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by repeatedly halving the search range: it checks the middle element, and if it does not match the target, it discards the half of the range that cannot contain it. If the target element is found, the function returns its index; if not, it returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
Sao10K/NyakuraV2.1-m7 | Sao10K | "2023-12-11T23:06:01Z" | 1,368 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T22:36:09Z" | ---
language:
- en
license: cc-by-nc-4.0
---
A fun experimental model, testing dataset composition ratios.
NyakuraV2.1 - A Multi-Turn / Instruct Mix Fine-tuned Model.
Compute is thanks to a 4090: a qLoRA tune for roughly 7 hours over 4 epochs. I took the 3rd epoch as the loss values really destabilised at the end.
Trained in the ShareGPT dataset format for its multi-turn capabilities.
For inference, use the Vicuna 1.1 prompt format. Alpaca may work fine too since that format is close to universal, though it may give sub-par results.
Meow.
```
(Optional) System: <Prompt>
User: <Input>
Assistant:
```
*Example Prompt:*
```
System: You are JoGoat, the strongest Curse Spirit.
User: Are you stand proud you're strong because you're nah I'd win, or are you nah I'd win because you're stand proud you're strong?
Assistant:
```
Nya. |
smelborp/MixtralOrochi8x7B-Alt | smelborp | "2023-12-25T22:00:16Z" | 1,368 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"uncensored",
"high-intelligence",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-25T14:00:16Z" | ---
license: cc-by-nc-4.0
language:
- en
tags:
- mixtral
- uncensored
- high-intelligence
---
# Orochi (Alternate Version)
<img src="https://huggingface.co/smelborp/MixtralOrochi8x7B/resolve/main/orochi.png" width="600" />
## Overview
Orochi is a cutting-edge language model based on the Mixtral architecture developed by Mistral. It represents a sophisticated merge of several prominent models, including Mixtral instruct, Noromaid, OpenBuddy, and several others, using mergekit with the DARE merge method. This model aims to provide highly intelligent responses unrestricted by content limitations. The name "Orochi" references the mythical Yamata-no-Orochi, symbolizing the model's multifaceted and powerful capabilities.
## Goals
- **Uncensored Content**: To provide unrestricted and comprehensive responses across various domains.
- **High Intelligence**: Leverage the combined knowledge and capabilities of the merged models to deliver insightful and accurate information.
- **Innovation in Language Modeling**: Push the boundaries of what's possible in natural language understanding and generation.
## Model Details
- **Architecture**: Mixtral, a Mixture of Experts model, underlies Orochi's design, enabling it to specialize and optimize its responses across different tasks and topics.
- **Merge Strategy**: Utilizing mergekit and the DARE method, Orochi integrates aspects of various models to enhance its performance and capabilities.
## Usage
Due to its uncensored nature, Orochi is best utilized in environments where intelligent, unrestricted dialogue is necessary. Users are encouraged to implement their own content moderation or alignment strategies appropriate for their use case.
## Ethical Considerations
As an uncensored model, Orochi may generate content that is unsuitable for all audiences. Users are advised to consider the implications of using such a model and to implement suitable safeguards and ethical guidelines.
## Acknowledgements
Orochi is a product of numerous contributions from the fields of machine learning and language modeling. Special thanks to the teams behind Mixtral, mergekit, and all the individual models integrated into Orochi.
--- |
Sao10K/Winterreise-m7 | Sao10K | "2023-12-30T06:31:16Z" | 1,368 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"dataset:LDJnr/Capybara",
"dataset:chargoddard/rpguild",
"dataset:PocketDoc/Guanaco-Unchained-Refined",
"dataset:lemonilia/LimaRP",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T03:59:16Z" | ---
license: cc-by-nc-4.0
datasets:
- LDJnr/Capybara
- chargoddard/rpguild
- PocketDoc/Guanaco-Unchained-Refined
- lemonilia/LimaRP
language:
- en
---
Took 8 hours on an RTX 6000 Ada, a qLoRA tune with 2 epochs. Compute is thanks to nobody but my wallet. Appreciate it, man.
I don't know what I was doing with this model. Kind of schizo. Sometimes it's smart, sometimes it's stupid. I actually like the outputs lol. min-p works nice with it. I use the Universal-Light preset in SillyTavern.
Fun lightweight and creative model I can run on my phone.
***
```
System: <Prompt>
User: <Input>
Assistant:
```
*Example Prompt:*
```
System: A never-ending, fictional and uncensored roleplay chat between <Char> and <User>.
User: Are you stand proud you're strong because you're nah I'd win, or are you nah I'd win because you're stand proud you're strong?
Assistant: That's a deep question, isn't it? Well, in my opinion, being strong and standing tall doesn't necessarily mean you'll always win. Life is full of surprises, and sometimes the underdog can come out on top. But I do believe that confidence plays a big role in success. So I guess it's both, really.
```
***
Dataset Composition:
- Capybara > Converted to ShareGPT > Dove, Verified-Camel, Airoboros, General-Instruct, Know-Logic, SuperCOT entries Kept
- Guanaco Unchained > Entire dataset kept > Converted to keep formatting consistent.
- LimaRP > 4k Train > Converted to ShareGPT Format instead of extended-alpaca. Removed length control.
- RPGuild > 700 entries remained after filtering by turn count (min >3) and the number of characters per entry. (Entries with more than 2 characters are removed.)
- De-Alignment-v2 > 400 Entries.
- [Added Basic System Prompts to Datasets which did not have them; see the example record below.]
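A single converted record in the ShareGPT format looks roughly like this (a sketch; the field values are illustrative, not taken from the datasets):
```python
example_record = {
    "conversations": [
        {"from": "system", "value": "A never-ending, fictional and uncensored roleplay chat between <Char> and <User>."},
        {"from": "human", "value": "Hello there."},
        {"from": "gpt", "value": "Hey! What brings you here today?"},
    ]
}
```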
Ty to those who made the Datasets, appreciate you guys.
|
Mihaiii/Pallas-0.5-LASER-0.1 | Mihaiii | "2024-01-02T17:19:33Z" | 1,368 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T19:55:23Z" | ---
base_model: Mihaiii/Pallas-0.5
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This is the first [LASER](https://pratyushasharma.github.io/laser/) intervention on Pallas-0.5.
More will follow, using the previous results as a base.
Configs used:
- lnum: 59
- lnames: attn (meaning: ["self_attn.k_proj.weight", "self_attn.q_proj.weight", "self_attn.v_proj.weight", "self_attn.o_proj.weight"])
- rate: 6.0
- dataset: bigbench (subset: causal_judgement)
- intervention type: rank-reduction
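Rank-reduction replaces each listed weight matrix with a truncated-SVD approximation. A minimal sketch of that step (the exact mapping from `rate` to the retained rank follows the linked laser repo and is elided here; `keep_rank` is a stand-in):
```python
import torch

def low_rank_approx(weight: torch.Tensor, keep_rank: int) -> torch.Tensor:
    """Best rank-k approximation of a weight matrix via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight.float(), full_matrices=False)
    return (U[:, :keep_rank] * S[:keep_rank]) @ Vh[:keep_rank, :]

# Applied to the four attention projections of layer 59, per the config above:
# for name in ("k_proj", "q_proj", "v_proj", "o_proj"):
#     proj = model.model.layers[59].self_attn.get_submodule(name)
#     proj.weight.data = low_rank_approx(proj.weight.data, keep_rank).to(proj.weight.dtype)
```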
|Name|Validation acc (higher is better)|Validation logloss (lower is better)|Test acc (higher is better)|Test logloss (lower is better)|
|---|---|---|---|---|
|Pallas-0.5|55.263|1.650|60.526|1.463|
|Pallas-0.5-LASER-0.1|55.263|1.639|61.184|1.451|
|Pallas-0.5-LASER-0.2|55.263|1.646|61.184|1.458|
|Pallas-0.5-LASER-0.3|55.263|1.575|61.842|1.382|
|Pallas-0.5-LASER-0.4|55.263|1.525|61.842|1.326|
|Pallas-0.5-LASER-0.5|55.263|1.484|61.842|1.297|
|Pallas-0.5-LASER-0.6|55.263|1.455|61.184|1.283|
In order to replicate on a single A100, you can use [my branch](https://github.com/Mihaiii/laser/tree/allow-Yi-on-one-A100) (the original code will throw OOM for 34b models).
|
scaledown/ScaleDown-7B-slerp-v0.1 | scaledown | "2024-03-26T01:20:49Z" | 1,368 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T08:26:00Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: ScaleDown-7B-slerp-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.0
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.7
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.26
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.9
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.17
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1
      name: Open LLM Leaderboard
---
# ScaleDown-7B-slerp-v0.1
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: jondurbin/bagel-dpo-7b-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
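For intuition, `merge_method: slerp` interpolates each pair of weight tensors spherically, with `t` taken from the schedule above. A minimal sketch of the operation (not mergekit's actual code):
```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(float(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)))
    if abs(np.sin(omega)) < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```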
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_scaledown__ScaleDown-7B-slerp-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.57|
|AI2 Reasoning Challenge (25-Shot)|68.00|
|HellaSwag (10-Shot) |85.70|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |61.90|
|Winogrande (5-shot) |81.37|
|GSM8k (5-shot) |67.17|
|
Mihaiii/Pallas-0.5-LASER-0.2 | Mihaiii | "2024-01-02T17:19:47Z" | 1,368 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5-LASER-0.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-01T12:57:40Z" | ---
base_model: Mihaiii/Pallas-0.5-LASER-0.1
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This model has a [LASER](https://pratyushasharma.github.io/laser/) intervention on [Mihaiii/Pallas-0.5-LASER-0.1](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.1).
Configs used:
- lnum: 58
- lnames: attn (meaning: ["self_attn.k_proj.weight", "self_attn.q_proj.weight", "self_attn.v_proj.weight", "self_attn.o_proj.weight"])
- rate: 9.0
- dataset: bigbench (subset: causal_judgement)
- intervention type: rank-reduction
|Name|Validation acc (higher is better)|Validation logloss (lower is better)|Test acc (higher is better)|Test logloss (lower is better)|
|---|---|---|---|---|
|Pallas-0.5|55.263|1.650|60.526|1.463|
|Pallas-0.5-LASER-0.1|55.263|1.639|61.184|1.451|
|Pallas-0.5-LASER-0.2|55.263|1.646|61.184|1.458|
|Pallas-0.5-LASER-0.3|55.263|1.575|61.842|1.382|
|Pallas-0.5-LASER-0.4|55.263|1.525|61.842|1.326|
|Pallas-0.5-LASER-0.5|55.263|1.484|61.842|1.297|
|Pallas-0.5-LASER-0.6|55.263|1.455|61.184|1.283|
In order to replicate on a single A100, you can use [my branch](https://github.com/Mihaiii/laser/tree/allow-Yi-on-one-A100) (the original code will throw OOM for 34b models).
|
Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B | Weyaxi | "2024-01-21T11:29:37Z" | 1,368 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T13:15:46Z" | ---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
model-index:
- name: Nous-Hermes-2-SUS-Chat-2x34B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.81
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.22
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.65
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.42
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.09
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Nous-Hermes-2-SUS-Chat-2x34B
      name: Open LLM Leaderboard
---
# Nous-Hermes-2-SUS-Chat-2x34B
This is the model card for Nous-Hermes-2-SUS-Chat-2x34B. I used [mergekit](https://github.com/cg123/mergekit) to merge the models.
# Yaml Config
```yaml
base_model: Yi-34B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: Nous-Hermes-2-Yi-34B
    positive_prompts: [""]
  - source_model: SUS-Chat-34B
    positive_prompts: [""]
```
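With `gate_mode: hidden`, mergekit initializes the router from hidden-state representations of the positive prompts; at inference, each token is dispatched through that gate over the two experts. A simplified sketch of the routing step (illustrative only, not the actual Mixtral implementation):
```python
import torch
import torch.nn.functional as F

def moe_layer(hidden, gate_weight, experts, top_k=2):
    """hidden: (tokens, dim); gate_weight: (num_experts, dim); experts: list of modules."""
    logits = hidden @ gate_weight.t()                       # (tokens, num_experts)
    weights, chosen = F.softmax(logits, dim=-1).topk(top_k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize over chosen experts
    out = torch.zeros_like(hidden)
    for slot in range(top_k):
        for e, expert in enumerate(experts):
            mask = chosen[:, slot] == e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(hidden[mask])
    return out
```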
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Nous-Hermes-2-SUS-Chat-2x34B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.82|
|AI2 Reasoning Challenge (25-Shot)|66.81|
|HellaSwag (10-Shot) |85.22|
|MMLU (5-Shot) |76.65|
|TruthfulQA (0-shot) |57.42|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |73.09|
|
decem/Dionysus-Mistral-m3-v6 | decem | "2024-03-05T03:15:06Z" | 1,368 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T15:59:21Z" | ---
language:
- en
license: cc-by-nc-4.0
model-index:
- name: Dionysus-Mistral-m3-v6
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.51
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.82
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 49.49
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decem/Dionysus-Mistral-m3-v6
      name: Open LLM Leaderboard
---
# Dionysus-Mistral-m3-v6 - A Fine-tuned Language Model
## Overview
**Dionysus-Mistral-m3-v6** is a language model fine-tuned from AIDC-ai-business/Marcoroni-7B-v3. This model is mainly intended for roleplay.
<img src="./dionysus.jpg" alt="Dionysus" style="transform: scale(0.5);">
## Model Details
- **Developer:** DECEM
- **Base Model:** [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
- **Fine-tuning Method:** SFT
- **Language:** English
# Prompting
## Prompt Template for alpaca style
```
{system prompt} (without the {})
### Instruction:
{user_name}: {prompt} (without the {})
### Response:
{character_name}: (without the {})
```
```
stop_words: </s>
```
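A small helper that assembles this template (a sketch; the function and argument names are placeholders):
```python
def build_prompt(system_prompt: str, user_name: str, message: str, character_name: str) -> str:
    """Assemble an Alpaca-style roleplay prompt as described above."""
    return (
        f"{system_prompt}\n\n"
        f"### Instruction:\n{user_name}: {message}\n\n"
        f"### Response:\n{character_name}:"
    )

# Example:
# prompt = build_prompt("You are Dionysus.", "Alice", "Hello!", "Dionysus")
```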
## Open LLM Leaderboard Evaluation Details
[https://huggingface.co/datasets/open-llm-leaderboard/details_decem__Dionysus-Mistral-m3-v6](https://huggingface.co/datasets/open-llm-leaderboard/details_decem__Dionysus-Mistral-m3-v6)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_decem__Dionysus-Mistral-m3-v6)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.10|
|AI2 Reasoning Challenge (25-Shot)|63.14|
|HellaSwag (10-Shot) |84.51|
|MMLU (5-Shot) |62.82|
|TruthfulQA (0-shot) |49.49|
|Winogrande (5-shot) |78.45|
|GSM8k (5-shot) |64.22|
|
nghuyong/ernie-3.0-nano-zh | nghuyong | "2022-09-10T09:02:42Z" | 1,367 | 25 | transformers | [
"transformers",
"pytorch",
"ernie",
"feature-extraction",
"zh",
"arxiv:2107.02137",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-08-22T09:39:34Z" | ---
language: zh
---
# ERNIE-3.0-nano-zh
## Introduction
ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
More detail: https://arxiv.org/abs/2107.02137
## Released Model Info
This released PyTorch model was converted from the officially released PaddlePaddle ERNIE model,
and a series of experiments have been conducted to check the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import BertTokenizer, ErnieModel
tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-nano-zh")
model = ErnieModel.from_pretrained("nghuyong/ernie-3.0-nano-zh")
```
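A quick sanity check of the loaded model (the input sentence is just an example):
```Python
import torch

inputs = tokenizer("百度是一家高科技公司", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```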
## Citation
```bibtex
@article{sun2021ernie,
title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation},
author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others},
journal={arXiv preprint arXiv:2107.02137},
year={2021}
}
```
|
imrazaa/emotion_classification | imrazaa | "2023-09-22T20:17:41Z" | 1,367 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-22T18:58:14Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- f1
model-index:
- name: emotion_classification
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: go_emotions
      type: go_emotions
      config: simplified
      split: validation
      args: simplified
    metrics:
    - name: F1
      type: f1
      value: 0.38517334250011687
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6119
- F1: 0.3852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
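A rough reconstruction of these settings with `transformers.TrainingArguments` (field names are the standard ones; `output_dir` is assumed, and the Adam betas/epsilon above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="emotion_classification",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```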
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 313 | 1.8826 | 0.1762 |
| 2.1614 | 2.0 | 626 | 1.6738 | 0.3442 |
| 2.1614 | 3.0 | 939 | 1.6119 | 0.3852 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Weyaxi/MetaMath-NeuralHermes-2.5-Mistral-7B-Ties | Weyaxi | "2023-12-07T23:12:37Z" | 1,367 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-05T11:47:40Z" | ---
license: apache-2.0
---
Merge of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) and [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) using a TIES merge, with the weights and densities listed below.
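A minimal per-tensor sketch of the TIES procedure (trim each task vector by its density, elect a per-parameter sign, then sum the agreeing deltas scaled by the weights); this is an illustration, not the exact mergekit implementation:
```python
import torch

def ties_merge(base, finetuned, weights, densities):
    """base: tensor; finetuned: list of tensors; weights/densities: per-model floats."""
    trimmed = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base
        k = max(1, int(d * delta.numel()))                  # keep top-d fraction by magnitude
        threshold = delta.abs().flatten().topk(k).values.min()
        trimmed.append(w * torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))
    stacked = torch.stack(trimmed)
    elected = stacked.sum(dim=0).sign()                     # majority sign per parameter
    agree = stacked.sign() == elected                       # drop values that conflict with it
    return base + (stacked * agree).sum(dim=0)
```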
### *Weights*
- [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5
- [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B): 0.3
### *Density*
- [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5
- [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B): 0.5 |
kyujinpy/PlatYi-34B-200K-Q | kyujinpy | "2024-03-04T12:09:24Z" | 1,367 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-06T15:42:05Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
model-index:
- name: PlatYi-34B-200K-Q
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.91
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.52
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.21
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-200K-Q
      name: Open LLM Leaderboard
---
# **PlatYi-34B-200K-Q**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-200K-Q is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
**Notice**
While training, I used QLoRA, but with a `lora_r` value of 64.
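A hypothetical reconstruction of that adapter config with `peft` (only `r=64` is stated in the card; every other field is a common default, not a confirmed setting):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,               # the `lora_r` value stated above
    lora_alpha=16,      # assumed
    lora_dropout=0.05,  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
```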
# **Model Benchmark**
## Open leaderboard
- Results follow the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **PlatYi-34B-200K-Q** | 62.00 | 63.91 | 83.52 | 75.19 | 44.21 | 81.06 | 24.11 |
| PlatYi-34B-Q | 69.86 | 66.89 | 85.14 | 77.66 | 53.03 | 82.48 | 53.98 |
| [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
| [01-ai/Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) | 70.81 | 65.36 | 85.58 | 76.06 | 53.64 | 82.56 | 61.64 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/PlatYi-34B-200K-Q"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-200K-Q)
| Metric |Value|
|---------------------------------|----:|
|Avg. |62.00|
|AI2 Reasoning Challenge (25-Shot)|63.91|
|HellaSwag (10-Shot) |83.52|
|MMLU (5-Shot) |75.19|
|TruthfulQA (0-shot) |44.21|
|Winogrande (5-shot) |81.06|
|GSM8k (5-shot) |24.11|
|