modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
osanseviero/mistral-instruct-slerp | osanseviero | 2024-01-10T16:37:14Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-10T16:22:44Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- mistralai/Mistral-7B-Instruct-v0.2
tags:
- mergekit
- merge
license: apache-2.0
---
# slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
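For intuition: SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves tensor magnitude better than plain averaging. A minimal sketch of the idea (my own simplification for illustration, not mergekit's actual implementation):
```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    theta = np.arccos(dot)              # angle between the two tensors
    if theta < eps:                     # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```
In the configuration below, `t` controls the blend (0 keeps the first model, 1 the second), with separate anchor schedules for the self-attention and MLP tensors.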
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
invalid-coder/SOLAR-10.7B-Instruct-SOLARC-M-10.7B-slerp | invalid-coder | 2024-01-10T19:55:52Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"upstage/SOLAR-10.7B-Instruct-v1.0",
"DopeorNope/SOLARC-M-10.7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-10T19:52:05Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- upstage/SOLAR-10.7B-Instruct-v1.0
- DopeorNope/SOLARC-M-10.7B
---
# SOLAR-10.7B-Instruct-SOLARC-M-10.7B-slerp
SOLAR-10.7B-Instruct-SOLARC-M-10.7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 48]
- model: DopeorNope/SOLARC-M-10.7B
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
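The five-element `t` lists above are anchor points that get interpolated across the 48-layer range, so different depths blend the two models differently (this is my reading of the config, not a description of mergekit internals). A small sketch of how such anchors expand into per-layer values:
```python
import numpy as np

anchors = [0, 0.5, 0.3, 0.7, 1]                  # self_attn t anchors from the config above
positions = np.linspace(0, 1, 48)                # normalized positions of the 48 layers
t_per_layer = np.interp(positions, np.linspace(0, 1, len(anchors)), anchors)
print(t_per_layer.round(2))                      # the t value applied at each layer
```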
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "invalid-coder/SOLAR-10.7B-Instruct-SOLARC-M-10.7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CultriX/MergeTrix-7B | CultriX | 2024-01-27T11:40:48Z | 730 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"abideen/NexoNimbus-7B",
"fblgit/UNA-TheBeagle-7b-v1",
"argilla/distilabeled-Marcoro14-7B-slerp",
"base_model:udkai/Turdus",
"base_model:abideen/NexoNimbus-7B",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"base_model:argilla/distilabeled-Marcoro14-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-15T16:09:25Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
base_model:
- udkai/Turdus
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
---
# EDIT:
Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
# IMPORTANT NOTE | READ ME! #
This model uses udkai/Turdus, which may produce inaccurate results for the Winogrande evaluation scores.
The following quotes are taken directly from that model's page:
- "A less contaminated version of udkai/Garrulus and the second model to be discussed in the paper Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC."
- "Subtle DPO-Contamination with modified Winogrande causes the average accuracy of all 5-non Winogrande metrics (e.g. including also MMLU and GSM8K) to be 0.2% higher than the underlying model."
In my understanding, the Winogrande scores are only slightly influenced by the DPO contamination, which has the "side-effect" of increasing the scores on the other benchmarks.
Since the effect on the Winogrande scores was subtle in the udkai/Turdus benchmarking results, and this model combines it with other models (probably making the effect even less pronounced),
I still believe that this model can be of value to the community, as its overall performance is quite impressive.
However, I do not want to mislead anybody or produce any unfair scores, hence this note! The full training configuration is fully transparent and can be found below.
I hope this model will prove useful to somebody. There are GGUF versions available for inference here: https://huggingface.co/CultriX/MergeTrix-7B-GGUF.
I personally tested them and found them to produce very pleasing results.
Kind regards,
CultriX
# PERSONAL DISCLAIMER
(This is probably a good moment to point out that I'm an amateur doing this for fun and am by no means an IT professional or data scientist.
Therefore my understanding of these topics might be incomplete or simply wrong, which in turn may cause me to make inaccurate claims.
If you notice that's the case, I invite you to notify me of my mistakes so that I can rectify any inaccuracies as soon as possible. Thanks for understanding!)
# Shoutout
Once again, a major thank you and shoutout to @mlabonne for his amazing article that I used to produce this result which can be found here: https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54
My other model, CultriX/MistralTrix-v1, was based on another great article from the same guy, which can be found here: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
(I hope he doesn't mind me using his own articles to beat him on the LeaderBoards for the second time this week... Like last time, all credit should be directed at him really!)
# MODEL INFORMATION:
# NAME: MergeTrix-7B
MergeTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: udkai/Turdus
# No parameters necessary for base model
- model: abideen/NexoNimbus-7B
parameters:
density: 0.53
weight: 0.4
- model: fblgit/UNA-TheBeagle-7b-v1
parameters:
density: 0.53
weight: 0.3
- model: argilla/distilabeled-Marcoro14-7B-slerp
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
int8_mask: true
dtype: bfloat16
```
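For intuition, `dare_ties` operates on task vectors (each fine-tuned model's delta from the base): it randomly drops a fraction of each delta and rescales the survivors (DARE), resolves sign disagreements between models (TIES), and adds the weighted result back to the base. A rough sketch of the idea on a single weight tensor (an illustration under my reading of the method, not mergekit's actual code):
```python
import torch

def dare_ties(base, finetuned, densities, weights):
    """Rough illustration of DARE-TIES on one weight tensor."""
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                              # task vector
        keep = torch.rand_like(delta) < density        # keep ~density fraction of entries
        delta = torch.where(keep, delta / density, torch.zeros_like(delta))  # drop and rescale
        deltas.append(weight * delta)
    stacked = torch.stack(deltas)
    majority_sign = torch.sign(stacked.sum(dim=0))     # elect a sign per parameter
    agreeing = torch.where(torch.sign(stacked) == majority_sign,
                           stacked, torch.zeros_like(stacked))
    return base + agreeing.sum(dim=0)                  # merge only sign-agreeing updates
```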
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MergeTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ibivibiv/orthorus-125b-moe | ibivibiv | 2024-03-04T23:44:37Z | 730 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"en",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-17T04:55:10Z | ---
language:
- en
license: llama2
tags:
- moe
model-index:
- name: orthorus-125b-moe
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/orthorus-125b-moe
name: Open LLM Leaderboard
---

This is a test run for a future MoE model built from 70B parameter models. I took WizardLM/WizardLM-70B-V1.0 and migtissera/Synthia-70B as the two base models and created discriminator prompts to push technical, logic, and math questions to the Wizard side and all creative or conversational questions to the Synthia side. Now that this is working for me, I am going to move on to fine-tuning models for more specific tasks. This model takes about 240GB of VRAM for full-resolution inference. As far as I know, it is the first publicly available 125B parameter MoE model. I plan on making more and sharing them, of course.
Hopefully I can add more info on this model; it loads perfectly for me and responds nicely. It might take me a bit, since I want to make "Cerberus" with the fine-tuned models and get it released. But enjoy this one in the meantime; it is a llama2-licensed model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__orthorus-125b-moe)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.58|
|AI2 Reasoning Challenge (25-Shot)|67.66|
|HellaSwag (10-Shot) |85.52|
|MMLU (5-Shot) |68.94|
|TruthfulQA (0-shot) |56.27|
|Winogrande (5-shot) |82.32|
|GSM8k (5-shot) |56.79|
|
adamo1139/yi-34b-200k-rawrr-dpo-2 | adamo1139 | 2024-05-27T21:30:18Z | 730 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T01:54:40Z | ---
license: apache-2.0
---
The anti-refusal, anti-instruct capabilities of this model are much stronger than those of yi-34b-200k-rawrr-dpo-1.
This model is Yi-34B-200K fine-tuned with DPO on the rawrr_v1 dataset using QLoRA at ctx 500, lora_r 16 and lora_alpha 16. I then applied the adapter to the base model. This model is akin to raw LLaMA 65B: it is not meant to follow instructions, but should instead be useful as a base for further fine-tuning.
The rawrr_v1 dataset makes this model issue fewer refusals, especially for benign topics, and makes it more completion-focused than instruct-focused. Base Yi-34B-200K suffers from contamination on instruct and refusal datasets; I am attempting to fix that by training base models with DPO on the rawrr dataset, making them more raw.
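For readers curious what such a run looks like, here is a minimal, hypothetical sketch using TRL's `DPOTrainer` with a QLoRA setup matching the hyperparameters described above. The dataset id, output path, and batch settings are assumptions for illustration, not the actual training script:
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import DPOTrainer

base_id = "01-ai/Yi-34B-200K"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# lora_r 16 and lora_alpha 16, as described above (target modules left to defaults)
peft_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")

# assumed dataset id; DPO expects prompt/chosen/rejected columns
dataset = load_dataset("adamo1139/rawrr_v1", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, the frozen base weights serve as the reference
    args=TrainingArguments(output_dir="yi-34b-rawrr-dpo", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=5e-5),
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=500,          # ctx 500, as described above
    max_prompt_length=250,
)
trainer.train()
```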
You should be able to achieve good zero-context uncensoredness and a notably low amount of GPT-slop if you fine-tune this model for instruct. |
Stopwolf/Cerberus-7B-slerp | Stopwolf | 2024-03-04T12:43:30Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T09:03:46Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3
model-index:
- name: Cerberus-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 17.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Stopwolf/Cerberus-7B-slerp
name: Open LLM Leaderboard
---
# Cerberus-7B-slerp
Cerberus-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3](https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
- model: UCLA-AGI/zephyr-7b-sft-full-SPIN-iter3
layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Stopwolf__Cerberus-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.46|
|AI2 Reasoning Challenge (25-Shot)|69.54|
|HellaSwag (10-Shot) |87.33|
|MMLU (5-Shot) |63.25|
|TruthfulQA (0-shot) |61.35|
|Winogrande (5-shot) |81.29|
|GSM8k (5-shot) |17.97|
|
migtissera/Tess-10.7B-v1.5 | migtissera | 2024-03-04T21:59:46Z | 730 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T06:09:21Z | ---
license: apache-2.0
model-index:
- name: Tess-10.7B-v1.5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-10.7B-v1.5
name: Open LLM Leaderboard
---
# Please use the updated version Tess-10.7B-v1.5b
Tess-10.7B-v1.5b has hyperparameter optimizations. Access at: https://huggingface.co/migtissera/Tess-10.7B-v1.5b
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-10.7B-v1.5 was trained on the SOLAR-10.7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
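A minimal generation example following this prompt format (a sketch only; the system message and sampling settings are placeholder choices, not recommendations from the card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-10.7B-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is spherical interpolation?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```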
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Tess-10.7B-v1.5)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.55|
|AI2 Reasoning Challenge (25-Shot)|65.02|
|HellaSwag (10-Shot) |84.07|
|MMLU (5-Shot) |65.09|
|TruthfulQA (0-shot) |47.43|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |54.36|
|
notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 | notadib | 2024-04-11T14:46:13Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T01:37:05Z | ---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
inference: false
model-index:
- name: Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
name: Open LLM Leaderboard
---
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`.
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.48|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |84.88|
|MMLU (5-Shot) |60.84|
|TruthfulQA (0-shot) |68.11|
|Winogrande (5-shot) |77.11|
|GSM8k (5-shot) |38.89|
|
notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30 | notadib | 2024-03-14T16:42:38Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T15:39:38Z | ---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
inference: false
model-index:
- name: Mistral-7B-Instruct-v0.2-attention-sparsity-30
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.97
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30
name: Open LLM Leaderboard
---
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue: `pip install git+https://github.com/huggingface/transformers`.
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-30)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.51|
|AI2 Reasoning Challenge (25-Shot)|62.97|
|HellaSwag (10-Shot) |84.71|
|MMLU (5-Shot) |60.49|
|TruthfulQA (0-shot) |67.49|
|Winogrande (5-shot) |77.98|
|GSM8k (5-shot) |39.42|
|
eren23/FrankenBeagle-SmallOverlap-test | eren23 | 2024-03-05T15:45:57Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T15:55:32Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralBeagle14-7B
base_model:
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralBeagle14-7B
model-index:
- name: FrankenBeagle-SmallOverlap-test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/FrankenBeagle-SmallOverlap-test
name: Open LLM Leaderboard
---
# FrankenBeagle-SmallOverlap-test
FrankenBeagle-SmallOverlap-test is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 24]
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [18, 32]
merge_method: passthrough
dtype: bfloat16
```
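Note that this passthrough merge simply stacks the two slices, so layers 18 through 23 of NeuralBeagle appear twice and the result is deeper than the donor model. A quick sanity check of the depth (assuming mergekit's half-open `layer_range` semantics):
```python
# Slices from the config above: layers 0-23 followed by layers 18-31.
first = range(0, 24)     # [0, 24] is half-open: 24 layers
second = range(18, 32)   # 14 layers, overlapping the first slice on layers 18-23
print(len(first) + len(second))  # 38 layers, vs. 32 in the original 7B
```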
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eren23/FrankenBeagle-SmallOverlap-test"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__FrankenBeagle-SmallOverlap-test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.30|
|AI2 Reasoning Challenge (25-Shot)|72.01|
|HellaSwag (10-Shot) |88.16|
|MMLU (5-Shot) |64.71|
|TruthfulQA (0-shot) |69.69|
|Winogrande (5-shot) |81.85|
|GSM8k (5-shot) |63.38|
|
Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test.2 | Josephgflowers | 2024-01-29T00:12:24Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T23:39:03Z | ---
license: mit
---
1.3B test of Cinder trained on math and step-by-step reasoning.
# Model Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets. |
cloudyu/19B_TRUTH_DPO | cloudyu | 2024-02-02T00:47:14Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T12:13:39Z | ---
license: cc-by-nc-4.0
---
* [This is a DPO-improved version of cloudyu/Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer)
* Metrics not yet tested!
|
ibivibiv/multimaster-7b-v2 | ibivibiv | 2024-03-04T23:44:28Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"moerge",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T14:56:24Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- moe
- moerge
model-index:
- name: multimaster-7b-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.63
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v2
name: Open LLM Leaderboard
---
# Model Card for MultiMaster 5x7B

Version 2 of a general-purpose knowledge model.
## Model Details
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [ibivibiv](https://huggingface.co/ibivibiv)
- **Funded by:** [ibivibiv](https://huggingface.co/ibivibiv)
- **Shared by:** [ibivibiv](https://huggingface.co/ibivibiv)
- **Model type:** Mixtral
- **Language(s) (NLP):** English
- **License:** Apache 2
## Uses
General Knowledge
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
## Evaluation
Coming Soon
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__multimaster-7b-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.33|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |87.59|
|MMLU (5-Shot) |65.09|
|TruthfulQA (0-shot) |60.63|
|Winogrande (5-shot) |84.29|
|GSM8k (5-shot) |71.87|
|
saishf/West-Maid-7B | saishf | 2024-03-04T14:33:29Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:senseable/WestLake-7B-v2",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T01:37:23Z | ---
license: cc-by-nc-4.0
tags:
- mergekit
- merge
base_model:
- senseable/WestLake-7B-v2
- NeverSleep/Noromaid-7B-0.4-DPO
model-index:
- name: West-Maid-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/West-Maid-7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [0, 32]
- model: NeverSleep/Noromaid-7B-0.4-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: senseable/WestLake-7B-v2
parameters:
t:
- filter: self_attn
value: [0.6, 0.7, 0.8, 0.9, 1]
- filter: mlp
value: [0.4, 0.3, 0.2, 0.1, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__West-Maid-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.09|
|AI2 Reasoning Challenge (25-Shot)|67.24|
|HellaSwag (10-Shot) |86.44|
|MMLU (5-Shot) |64.85|
|TruthfulQA (0-shot) |51.00|
|Winogrande (5-shot) |82.72|
|GSM8k (5-shot) |62.32|
|
Radiantloom/radiantloom-mixtral-8x7b-fusion-dpo | Radiantloom | 2024-02-05T09:09:01Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral 8x7B",
"Mistral",
"merge",
"moe",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T02:21:43Z | ---
library_name: transformers
tags:
- Mixtral 8x7B
- Mistral
- merge
- moe
license: apache-2.0
---
<img src="https://huggingface.co/aigeek0x0/radiantloom-mixtral-8x7b-fusion/resolve/main/Radiantloom-Mixtral-8x7B-Fusion.png" alt="Radiantloom Mixtral 8X7B Fusion" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
## Radiantloom Mixtral 8X7B Fusion DPO
This model is a fine-tuned version of [Radiantloom Mixtral 8X7B Fusion](https://huggingface.co/Radiantloom/radiantloom-mixtral-8x7b-fusion). It was fine-tuned using Direct Preference Optimization (DPO). |
ibivibiv/multimaster-7b-v3 | ibivibiv | 2024-03-04T23:43:52Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:27:45Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- moe
model-index:
- name: multimaster-7b-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v3
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__multimaster-7b-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.07|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |87.65|
|MMLU (5-Shot) |65.07|
|TruthfulQA (0-shot) |59.70|
|Winogrande (5-shot) |84.06|
|GSM8k (5-shot) |71.57|
|
Xenon1/Xenon-4 | Xenon1 | 2024-02-04T05:01:42Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Xenon-4",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:57:37Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Xenon-4
pipeline_tag: text-generation
---
# Model Card for Xenon-4
Mistral-7B-v0.1 fine-tuned on the UltraFeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id, while subsequent instructions should not. The assistant's generation will be terminated by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Xenon-4")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Xenon-4")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
nxn1231/yi6 | nxn1231 | 2024-02-05T15:17:49Z | 730 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T08:24:24Z | ---
license: mit
---
Base model: 01-ai/Yi-6B
The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
🙌 Targeted as a bilingual language model and trained on a 3T-token multilingual corpus, the Yi series models are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
For English language capability, the Yi series models ranked 2nd (just behind GPT-4), outperforming other LLMs (such as LLaMA2-chat-70B, Claude 2, and ChatGPT) on the AlpacaEval Leaderboard in Dec 2023.
For Chinese language capability, the Yi series models landed in 2nd place (following GPT-4), surpassing other LLMs (such as Baidu ERNIE, Qwen, and Baichuan) on the SuperCLUE in Oct 2023.
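The checkpoint loads with the standard `transformers` API; a minimal sketch (the prompt and generation settings here are illustrative, not recommendations):

```python
# Minimal loading sketch (prompt and generation settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nxn1231/yi6")
model = AutoModelForCausalLM.from_pretrained("nxn1231/yi6", device_map="auto")

inputs = tokenizer("There's a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```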
🙏 (Credits to LLaMA) Thanks to the Transformer and LLaMA open-source communities for reducing the effort required to build from scratch and enabling the use of the same tools within the AI ecosystem.
If you're interested in Yi's adoption of LLaMA architecture and license usage policy, see Yi's relation with LLaMA. ⬇️ |
shuvom/yuj-v1 | shuvom | 2024-03-07T14:17:30Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"hindi",
"english",
"Llama2",
"ai4bharat/Airavata",
"BhabhaAI/Gajendra-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-06T08:11:57Z | ---
license: apache-2.0
tags:
- merge
- hindi
- english
- Llama2
- ai4bharat/Airavata
- BhabhaAI/Gajendra-v0.1
model-index:
- name: yuj-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 45.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 4.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shuvom/yuj-v1
name: Open LLM Leaderboard
---
# The Model yuj-v1:
The yuj-v1 model is a blend of advanced models strategically crafted to enhance Hindi large language models (LLMs) effectively and democratically. Its primary goals are to catalyze the development of Hindi LLMs and their communities and to make significant contributions to linguistic knowledge. The term "yuj," from Sanskrit, signifies fundamental unity, highlighting the integration of sophisticated technologies to improve the language experience for users in the Hindi-speaking community.
Official GGUF version: [shuvom/yuj-v1-GGUF](https://huggingface.co/shuvom/yuj-v1-GGUF)
Below are the model which are leverage to build this yuj-v1:
* [ai4bharat/Airavata](https://huggingface.co/ai4bharat/Airavata)
* [BhabhaAI/Gajendra-v0.1](https://huggingface.co/BhabhaAI/Gajendra-v0.1)
## ☄️Space to use it (yuj-v1 tryO):
<a target="_blank" href="https://shuvom-yuj-v1-tryo.hf.space">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 💻 Usage:
First, you need to install some of the packages below:
1. Bits and bytes
```python
!pip install bitsandbytes
```
2. Accelerate (to install the latest version)
```python
!pip install git+https://github.com/huggingface/accelerate.git
```
3. Usage
```python
# Usage
import torch
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shuvom/yuj-v1")
# load the model in 4-bit quantization
model = AutoModelForCausalLM.from_pretrained("shuvom/yuj-v1", torch_dtype=torch.bfloat16, load_in_4bit=True)
prompt = "युज शीर्ष द्विभाषी मॉडल में से एक है"  # gloss: "Yuj is one of the top bilingual models"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=65)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
4. Output
```python
युज शीर्ष द्विभाषी मॉडल में से एक है। यह एक उत्पादक मॉडल है जो एक साथ एक ट्रांसफॉर्मर और एक आत्म-ध्यान तंत्रिका नेटवर्क को जोड़ता है। यह एक ट्रांसफॉर्मर वास्तुकला का उपयोग करता है जो एक ट्रांसफॉर्मर मॉडल की तुलना में बहुत अधिक जटिल है।
# Gloss: "Yuj is one of the top bilingual models. It is a generative model that combines a transformer with a self-attention neural network. It uses a transformer architecture that is much more complex than a plain transformer model."
```
## 🧩 Configuration
```yaml
models:
- model: sarvamai/OpenHathi-7B-Hi-v0.1-Base
# no parameters necessary for base model
- model: ai4bharat/Airavata
parameters:
density: 0.5
weight: 0.5
- model: BhabhaAI/Gajendra-v0.1
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: sarvamai/OpenHathi-7B-Hi-v0.1-Base
parameters:
normalize: true
dtype: float16
```
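To reproduce a merge from a config like this, the YAML can be passed to mergekit's command-line tool (a sketch; `config.yaml` and the output path are placeholders):

```python
# Sketch: running mergekit on the config above (paths are placeholders).
!pip install -qU mergekit
!mergekit-yaml config.yaml ./yuj-v1-merged
```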
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shuvom__yuj-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.97|
|AI2 Reasoning Challenge (25-Shot)|45.65|
|HellaSwag (10-Shot) |70.10|
|MMLU (5-Shot) |43.78|
|TruthfulQA (0-shot) |41.69|
|Winogrande (5-shot) |69.85|
|GSM8k (5-shot) | 4.78|
|
Gille/StrangeMerges_23-7B-slerp | Gille | 2024-03-04T21:53:14Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OGNO-7B",
"Gille/StrangeMerges_21-7B-slerp",
"base_model:paulml/OGNO-7B",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T02:53:17Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- paulml/OGNO-7B
- Gille/StrangeMerges_21-7B-slerp
base_model:
- paulml/OGNO-7B
- Gille/StrangeMerges_21-7B-slerp
model-index:
- name: StrangeMerges_23-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_23-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_23-7B-slerp
StrangeMerges_23-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: paulml/OGNO-7B
layer_range: [0, 32]
- model: Gille/StrangeMerges_21-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: paulml/OGNO-7B
parameters:
t:
- filter: self_attn
value: [0.7, 0.5, 0.3, 0.5, 0.7]
- filter: mlp
value: [0.3, 0.5, 0.7, 0.5, 0.3]
- value: 0.45
dtype: bfloat16
```
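Conceptually, SLERP interpolates each matched pair of weight tensors along the arc between them rather than a straight line, with the per-layer `t` values above weighting self-attention and MLP tensors differently across depth. A minimal sketch of the underlying operation (illustrative only, not mergekit's exact implementation):

```python
# Illustrative spherical linear interpolation between two weight tensors.
# Conceptual sketch, not mergekit's exact implementation.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten(), b.flatten()
    cos_omega = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```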
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_23-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_23-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.17|
|AI2 Reasoning Challenge (25-Shot)|73.55|
|HellaSwag (10-Shot) |88.90|
|MMLU (5-Shot) |64.87|
|TruthfulQA (0-shot) |75.13|
|Winogrande (5-shot) |84.29|
|GSM8k (5-shot) |70.28|
|
sethuiyer/Aika-7B | sethuiyer | 2024-03-07T07:20:51Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:Guilherme34/Samantha-v2",
"base_model:jan-hq/stealth-v1.3",
"base_model:mitultiwari/mistral-7B-instruct-dpo",
"base_model:senseable/WestLake-7B-v2",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T13:58:31Z | ---
language:
- en
license: cc
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Anthropic/hh-rlhf
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- Guilherme34/Samantha-v2
- jan-hq/stealth-v1.3
- mitultiwari/mistral-7B-instruct-dpo
- senseable/WestLake-7B-v2
model-index:
- name: sethuiyer/Aika-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
name: Open LLM Leaderboard
---
# Aika-7B
<p align="center">
<img src="https://huggingface.co/sethuiyer/Aika-7B/resolve/main/aika.webp" height="128px" alt="Aika">
</p>
Aika is a language model constructed using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mitultiwari/mistral-7B-instruct-dpo](https://huggingface.co/mitultiwari/mistral-7B-instruct-dpo) as a base. Aika is designed to interact with users in a way that feels natural and human-like, to solve problems and answer questions with a high degree of accuracy and truthfulness, and to engage in creative and logical tasks with proficiency.
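The DARE step of this method randomly drops a fraction of each fine-tuned model's delta parameters (its difference from the base) and rescales the survivors so the expected update is preserved; a rough per-tensor sketch (illustrative only, not mergekit's exact implementation):

```python
# Rough sketch of DARE's drop-and-rescale step on one delta tensor (illustrative).
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta parameter with probability `density`; rescale survivors by 1/density."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

# delta = finetuned_weight - base_weight; TIES then sign-elects and merges the
# processed deltas before adding them back onto the base weights.
```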
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [Guilherme34/Samantha-v2](https://huggingface.co/Guilherme34/Samantha-v2)
* [jan-hq/stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
The base model is Mistral-7B-v0.1 fine-tuned on [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf).
### Why?
- **Base model tuned on Anthropic RLHF dataset**: Safe AI as a base model, to balance the uncensored model below.
- **Silicon-Maid-7B**: Boasts excellent multi-turn conversational skills and logical coherence, ensuring smooth interactions.
- **Samantha-V2**: Offers empathy and human-like responses, equipped with programmed "self-awareness" for a more personalized experience.
- **Stealth-V1.3**: Known for enhancing performance in merges when integrated as a component, optimizing Aika's functionality.
- **WestLake-7B-V2**: Sets a high benchmark for emotional intelligence (EQ) and excels in creative writing, enhancing Aika's ability to understand and respond to your needs.
Combine them all

[Source](https://powerpuffgirls.fandom.com/wiki/The_Powerpuff_Girls_theme_song?file=Professor_Utonium_Mixing_Stew.png)
You get Aika - a considerate, personal digital assistant.
### Configuration
Please check [mergekit_config.yml](https://huggingface.co/sethuiyer/Aika-7B/blob/main/mergekit_config.yml) for the merge config.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__Aika-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.25|
|AI2 Reasoning Challenge (25-Shot)|65.36|
|HellaSwag (10-Shot) |81.49|
|MMLU (5-Shot) |53.91|
|TruthfulQA (0-shot) |51.22|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |25.78|
|
chasedreaminf/Dream-7B-slerp | chasedreaminf | 2024-02-16T18:19:41Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ignos/Mistral-T5-7B-v1",
"Toten5/Marcoroni-neural-chat-7B-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T18:17:42Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ignos/Mistral-T5-7B-v1
- Toten5/Marcoroni-neural-chat-7B-v2
---
# Dream-7B-slerp
Dream-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [ignos/Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1)
* [Toten5/Marcoroni-neural-chat-7B-v2](https://huggingface.co/Toten5/Marcoroni-neural-chat-7B-v2)
## 🧩 Configuration
The merge configuration file is `./merge/mergekit_config.yml`; its YAML contents were not captured in this card. |
macadeliccc/SmaugDolphin-60B | macadeliccc | 2024-03-04T19:20:46Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T03:52:39Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/cognitivecomputations/dolphin-2.2-yi-34b-200k/blob/main/LICENSE
model-index:
- name: SmaugDolphin-60B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.44
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SmaugDolphin-60B
name: Open LLM Leaderboard
---
# Smaug Dolphin 60B

This model is a MoErge of [abacusai/Smaug-34B-v0.1](https://huggingface.co/abacusai/Smaug-34B-v0.1) and [cognitivecomputations/dolphin-2.2-yi-34b-200k](https://huggingface.co/cognitivecomputations/dolphin-2.2-yi-34b-200k)
This model works as expected. Open LLM Leaderboard evaluation results are reported below.
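MoE merges ("MoErges") like this one are typically produced with mergekit's MoE tooling from a config listing the expert models (a sketch of the invocation; `moe_config.yaml` is a placeholder, not the author's actual config):

```python
# Sketch: a typical mergekit-moe invocation (config path is a placeholder).
!pip install -qU mergekit
!mergekit-moe moe_config.yaml ./SmaugDolphin-60B
```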
## GGUF + iMatrix
Available [here]()
## AWQ
TODO
## Example output


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__SmaugDolphin-60B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.44|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |86.55|
|MMLU (5-Shot) |76.78|
|TruthfulQA (0-shot) |67.44|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |70.96|
|
abideen/MonarchCoder-7B | abideen | 2024-03-10T16:10:47Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0",
"mlabonne/AlphaMonarch-7B",
"conversational",
"en",
"base_model:Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0",
"base_model:mlabonne/AlphaMonarch-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T19:14:40Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
base_model:
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
model-index:
- name: MonarchCoder-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.52
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.3
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.65
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.21
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.19
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.13
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
language:
- en
library_name: transformers
---
# MonarchCoder-7B

MonarchCoder-7B is a slerp merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0](https://huggingface.co/Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
The main aim behind creating this model is to build one that performs well at reasoning, conversation, and coding. AlphaMonarch performs amazingly well on reasoning and conversation tasks. Merging AlphaMonarch with a coding model yielded MonarchCoder-7B, which performs better on the OpenLLM, Nous, and HumanEval benchmarks, although [MonarchCoder-2x7B](https://huggingface.co/abideen/MonarchCoder-MoE-2x7B) performs better still than MonarchCoder-7B.
## 🏆 Evaluation results
| Metric                          |MonarchCoder-MoE-2x7B|MonarchCoder-7B|AlphaMonarch|
|---------------------------------|--------------------:|--------------:|-----------:|
|Avg.                             |                74.23|          71.17|       75.99|
|HumanEval                        |                41.15|          39.02|       34.14|
|HumanEval+                       |                29.87|          31.70|       29.26|
|MBPP                             |                40.60|              *|           *|
|AI2 Reasoning Challenge (25-Shot)|                70.99|          68.52|       73.04|
|HellaSwag (10-Shot)              |                87.99|          87.30|       89.18|
|MMLU (5-Shot)                    |                65.11|          64.65|       64.40|
|TruthfulQA (0-shot)              |                71.25|          61.21|       77.91|
|Winogrande (5-shot)              |                80.66|          80.19|       84.69|
|GSM8k (5-shot)                   |                69.37|          65.13|       66.72|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
layer_range: [0, 32]
- model: mlabonne/AlphaMonarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abideen/MonarchCoder-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nnethercott/llava-v1.5-7b_vicuna | nnethercott | 2024-03-06T14:46:26Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T07:26:37Z | ---
license: llama2
model-index:
- name: llava-v1.5-7b_vicuna
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nnethercott/llava-v1.5-7b_vicuna
name: Open LLM Leaderboard
---
## Model details
**Motivation**
This model contains the fine-tuned language-model weights from `liuhaotian/llava-v1.5-7b` so that LLM benchmarking can be done.
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
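Because this repository packages the language-model weights on their own, the checkpoint can be loaded as a plain causal LM for text-only benchmarking (a minimal sketch; the prompt is illustrative):

```python
# Minimal sketch: load the packaged LM weights as a plain causal language model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nnethercott/llava-v1.5-7b_vicuna")
model = AutoModelForCausalLM.from_pretrained("nnethercott/llava-v1.5-7b_vicuna", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```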
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nnethercott__llava-v1.5-7b_vicuna)
| Metric |Value|
|---------------------------------|----:|
|Avg. |52.28|
|AI2 Reasoning Challenge (25-Shot)|52.65|
|HellaSwag (10-Shot) |76.09|
|MMLU (5-Shot) |51.68|
|TruthfulQA (0-shot) |45.86|
|Winogrande (5-shot) |72.06|
|GSM8k (5-shot) |15.31|
|
macadeliccc/gemma-orchid-7b-dpo | macadeliccc | 2024-03-04T16:33:51Z | 730 | 6 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"dataset:Thermostatic/flowers",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:Intel/orca_dpo_pairs",
"dataset:glaiveai/glaive-function-calling-v2",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T02:49:57Z | ---
license: other
datasets:
- Thermostatic/flowers
- jondurbin/truthy-dpo-v0.1
- Intel/orca_dpo_pairs
- glaiveai/glaive-function-calling-v2
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
- name: gemma-orchid-7b-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/gemma-orchid-7b-dpo
name: Open LLM Leaderboard
---
# Gemma Orchid 7b
<div align="center">

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
</div>
This model is the second checkpoint of an ongoing project. It is capable of function calling and has a strong base in conversational skills.
It has been fine-tuned on roughly 80k samples so far.
# Training
+ Time to complete: ~20 hours
+ Datasets: Thermostatic/flowers, Intel/orca_dpo_pairs, jondurbin/truthy-dpo-v0.1, glaiveai/glaive_function_calling_v2
+ Evaluation loss: 0.69
+ Method: LoRa
+ Prompt Format: ChatML
Thermostatic/flowers is a blend of open-source model generations formatted in ShareGPT style; it also includes all of Capybara.
This model has been exposed to a wide variety of data. [macadeliccc/gemma-function-calling-7b](https://huggingface.co/macadeliccc/gemma-function-calling-7b) is suitable for further fine-tuning with the dataset of your choosing.
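Since training used the ChatML prompt format noted above, prompts should be wrapped in ChatML delimiters (a sketch using the standard `<|im_start|>`/`<|im_end|>` markers):

```python
# Sketch: standard ChatML prompt formatting, matching the training prompt format.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite me a poem about Machine Learning.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```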
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/gemma-orchid-7b-dpo")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/gemma-orchid-7b-dpo", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Evaluations
In progress
## ExLlamaV2
Available [here](https://huggingface.co/bartowski/gemma-orchid-7b-dpo-exl2)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__gemma-orchid-7b-dpo)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.37|
|AI2 Reasoning Challenge (25-Shot)|62.88|
|HellaSwag (10-Shot) |80.95|
|MMLU (5-Shot) |61.41|
|TruthfulQA (0-shot) |53.27|
|Winogrande (5-shot) |77.51|
|GSM8k (5-shot) |50.19|
|
ChaoticNeutrals/Prodigy_7B | ChaoticNeutrals | 2024-03-04T20:10:35Z | 730 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:ChaoticNeutrals/This_is_fine_7B",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T05:27:17Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- ChaoticNeutrals/This_is_fine_7B
model-index:
- name: Prodigy_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prodigy_7B
name: Open LLM Leaderboard
---
# Wing
GGUF available here: https://huggingface.co/Lewdiculous/Prodigy_7B-GGUF-Imatrix
Big thanks to https://huggingface.co/Lewdiculous

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [ChaoticNeutrals/This_is_fine_7B](https://huggingface.co/ChaoticNeutrals/This_is_fine_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ChaoticNeutrals/This_is_fine_7B
layer_range: [0, 32]
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
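A minimal `transformers` usage sketch in the style of comparable merged-model cards (generation settings are illustrative):

```python
# Minimal usage sketch (generation settings are illustrative).
!pip install -qU transformers accelerate
import transformers
import torch

model = "ChaoticNeutrals/Prodigy_7B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline("Tell me a short story about a fox.", max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```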
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Prodigy_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.68|
|AI2 Reasoning Challenge (25-Shot)|71.59|
|HellaSwag (10-Shot) |88.09|
|MMLU (5-Shot) |64.92|
|TruthfulQA (0-shot) |68.57|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |64.37|
|
DreadPoor/RoyalMaid-7B-slerp | DreadPoor | 2024-03-07T12:56:37Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DreadPoor/WestMonarchLasers-7B-slerp",
"DreadPoor/KunoMaid-7B-slerp",
"base_model:DreadPoor/WestMonarchLasers-7B-slerp",
"base_model:DreadPoor/KunoMaid-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-27T23:31:26Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- DreadPoor/WestMonarchLasers-7B-slerp
- DreadPoor/KunoMaid-7B-slerp
base_model:
- DreadPoor/WestMonarchLasers-7B-slerp
- DreadPoor/KunoMaid-7B-slerp
model-index:
- name: RoyalMaid-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/RoyalMaid-7B-slerp
name: Open LLM Leaderboard
---
# RoyalMaid-7B-slerp
RoyalMaid-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DreadPoor/WestMonarchLasers-7B-slerp](https://huggingface.co/DreadPoor/WestMonarchLasers-7B-slerp)
* [DreadPoor/KunoMaid-7B-slerp](https://huggingface.co/DreadPoor/KunoMaid-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: DreadPoor/WestMonarchLasers-7B-slerp
layer_range: [0, 32]
- model: DreadPoor/KunoMaid-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: DreadPoor/WestMonarchLasers-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/RoyalMaid-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__RoyalMaid-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.75|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |87.25|
|MMLU (5-Shot) |64.72|
|TruthfulQA (0-shot) |64.18|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |67.55|
|
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo | CorticalStack | 2024-02-28T20:49:34Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"conversational",
"base_model:CorticalStack/neurotic-crown-clown-7b-ties",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T18:34:57Z | ---
license: apache-2.0
tags:
- dpo
base_model:
- CorticalStack/neurotic-crown-clown-7b-ties
dataset:
- CorticalStack/tak-stack-dpo
---
<img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# neurotic-crown-clown-7b-tak-stack-dpo
neurotic-crown-clown-7b-tak-stack-dpo is a DPO fine-tuned version of [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties) using the [CorticalStack/tak-stack-dpo](https://huggingface.co/datasets/CorticalStack/tak-stack-dpo) dataset.
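A rough sketch of how such a DPO fine-tune is typically wired up with TRL and PEFT, plugging in the LoRA and training hyperparameters listed below (illustrative; this is not the author's actual training script):

```python
# Rough DPO fine-tuning sketch with TRL + PEFT (illustrative, not the actual script).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "CorticalStack/neurotic-crown-clown-7b-ties"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, the adapter-disabled model serves as reference
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        optim="paged_adamw_32bit",
        max_steps=100,
        learning_rate=5e-05,
        lr_scheduler_type="cosine",
    ),
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    train_dataset=load_dataset("CorticalStack/tak-stack-dpo", split="train"),
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=32, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
trainer.train()
```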
### LoRA
- r: 32
- LoRA alpha: 32
- LoRA dropout: 0.05
### Training arguments
- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536 |
mayacinka/yam-jom-7B | mayacinka | 2024-03-02T20:39:44Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"yam-peleg/Experiment26-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T05:56:17Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
- yam-peleg/Experiment26-7B
model-index:
- name: yam-jom-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 78.04
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-jom-7B
name: Open LLM Leaderboard
---
# yam-jom-7B
yam-jom-7B is a task arithmetic merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
models:
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
parameters:
weight: 0.35
- model: yam-peleg/Experiment26-7B
parameters:
weight: 0.65
base_model: yam-peleg/Experiment26-7B
merge_method: task_arithmetic
dtype: bfloat16
```
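Task arithmetic builds the merge as the base model plus a weighted sum of "task vectors" (each source model's delta from the base); a conceptual per-tensor sketch (not mergekit's exact implementation):

```python
# Conceptual per-tensor task-arithmetic merge (not mergekit's exact implementation).
import torch

def task_arithmetic(base: torch.Tensor, models: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    merged = base.clone()
    for m, w in zip(models, weights):
        merged += w * (m - base)  # add this model's weighted task vector
    return merged
```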
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/yam-jom-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.60|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |89.15|
|MMLU (5-Shot) |64.51|
|TruthfulQA (0-shot) |78.04|
|Winogrande (5-shot) |84.93|
|GSM8k (5-shot) |69.60|
|
giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1 | giraffe176 | 2024-03-03T20:34:57Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"arxiv:2306.05685",
"arxiv:2312.06281",
"base_model:cognitivecomputations/WestLake-7B-v2-laser",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:Intel/neural-chat-7b-v3-3",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T18:35:41Z | ---
arxiv:
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- cognitivecomputations/WestLake-7B-v2-laser
- NeverSleep/Noromaid-7B-0.4-DPO
- teknium/OpenHermes-2.5-Mistral-7B
- mistralai/Mistral-7B-v0.1
- Intel/neural-chat-7b-v3-3
model-index:
- name: WestLake_Noromaid_OpenHermes_neural-chatv0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: EQ-Bench
type: eq-bench
config: EQ-Bench
split: v2
args:
num_few_shot: 1
metrics:
- type: acc_norm
value: 65.56
name: normalized accuracy
source:
url: https://github.com/EQ-bench/EQ-Bench
name: EQ-Bench v2.1
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
name: Open LLM Leaderboard
---
# WestLake_Noromaid_OpenHermes_neural-chatv0.1
<img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/ElrkYfCq7kNW9zxZhXWEz.png" alt="drawing" width="800"/>
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). DPO training data has been used to slightly uncensor the LLM.
The model's focus is conversational roleplay. In limited testing, I've been very happy with the result: it has been able to pick up stories where other models have failed or started to loop their responses, and it seems to pace the story well.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
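For intuition, DARE randomly drops each fine-tuned model's delta parameters and rescales the survivors before the TIES-style sign-consensus combination. A toy sketch of the drop-and-rescale step (illustrative only, not the mergekit implementation):

```python
# Toy DARE drop-and-rescale on a single tensor (see arXiv:2311.03099).
# Deltas are kept with probability `density` and rescaled by 1/density,
# which leaves the expected delta unchanged.
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float = 0.55) -> torch.Tensor:
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))  # keep with prob = density
    return delta * mask / density
```

In the configuration below, `density: 0.55` is this keep-probability, and `weight` scales each model's rescaled delta before the deltas are combined.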
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: cognitivecomputations/WestLake-7B-v2-laser
parameters:
density: 0.55
weight: 0.15
- model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
density: 0.55
weight: 0.35
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.55
weight: 0.30
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.55
weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
### Benchmark Testing
<img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/sO_QybG17FYdT47FyAcMs.png" alt="drawing" width="800"/>
| | MT-Bench | EQ-Bench v2.1 |
|---------------------------------------------------------|---------------------------------------------|---------------------------------------------|
| giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1 | 7.171875 | 65.56 |
| | [(Paper)](https://arxiv.org/abs/2306.05685) | [(Paper)](https://arxiv.org/abs/2312.06281) |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__WestLake_Noromaid_OpenHermes_neural-chatv0.1)
| | Avg. | AI2 (25-Shot) | HellaSwag (10-Shot) | MMLU (5-Shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8k (5-shot) |
|:-----------------------------------------:|-------|-----------------------------------|---------------------|---------------|---------------------|---------------------|----------------|
| This model | 68.86 | 66.72 | 85.37 | 64.67 | 51.50 | 79.72 | 65.20 |
| cognitivecomputations/WestLake-7B-v2-laser| **74.78** | 73.29 | **88.66** | **64.72** | **67.04** | **86.74** | **68.23** |
| NeverSleep/Noromaid-7B-0.4-DPO | 59.08 | 62.29 | 84.32 | 63.2 | 42.28 | 76.95 | 25.47 |
| teknium/OpenHermes-2.5-Mistral-7B | 61.52 | 64.93 | 84.18 | 63.64 | 52.24 | 78.06 | 26.08 |
| Intel/neural-chat-7b-v3-3 | 69.83 | **66.89** | 85.26 | 63.07 | 63.01 | 79.64 | 61.11 |
### DPO training data used:
- unalignment/toxic-dpo-v0.2 (Curated version)
|
MSL7/INEX12-7b | MSL7 | 2024-03-03T10:28:41Z | 730 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge2",
"yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T10:24:09Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge2
- yam-peleg/Experiment26-7B
---
# INEX12-7b
INEX12-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge2](https://huggingface.co/liminerity/merge2)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/merge2
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Eric111/CatunaLaserPi | Eric111 | 2024-03-03T21:00:48Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Eric111/caTUNABeagle",
"BryanSwk/LaserPipe-7B-SLERP",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T18:54:10Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- Eric111/caTUNABeagle
- BryanSwk/LaserPipe-7B-SLERP
---
# CatunaLaserPi
CatunaLaserPi is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Eric111/caTUNABeagle](https://huggingface.co/Eric111/caTUNABeagle)
* [BryanSwk/LaserPipe-7B-SLERP](https://huggingface.co/BryanSwk/LaserPipe-7B-SLERP)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Eric111/caTUNABeagle
layer_range: [0, 32]
- model: BryanSwk/LaserPipe-7B-SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: Eric111/caTUNABeagle
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Gille/StrangeMerges_30-7B-slerp | Gille | 2024-03-18T03:33:24Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_21-7B-slerp",
"yam-peleg/Experiment26-7B",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:yam-peleg/Experiment26-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T23:26:26Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_21-7B-slerp
- yam-peleg/Experiment26-7B
base_model:
- Gille/StrangeMerges_21-7B-slerp
- yam-peleg/Experiment26-7B
---
# StrangeMerges_30-7B-slerp
Given the benchmark score distribution, this model might jump several spots if trained on something like orca-math or truthy datasets.
Does anyone have a good walkthrough video covering roughly how long training takes, Colab costs, etc.?
StrangeMerges_30-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_21-7B-slerp
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_21-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
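For reference, SLERP interpolates along the arc between the two weight sets instead of averaging them linearly; the per-filter `t` schedules above vary the interpolation factor across layer groups. A toy sketch of the formula (illustrative only; mergekit applies it per tensor):

```python
# Toy spherical linear interpolation between two flattened weight tensors:
# slerp(t) = sin((1-t)*omega)/sin(omega) * v0 + sin(t*omega)/sin(omega) * v1,
# where omega is the angle between v0 and v1.
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp((v0n * v1n).sum(), -1.0, 1.0)
    omega = torch.acos(dot)
    if omega.abs() < 1e-4:  # nearly colinear: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
```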
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_30-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CorticalStack/mistral-7b-distilabel-truthy-dpo | CorticalStack | 2024-03-05T09:17:28Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T09:15:38Z | ---
license: apache-2.0
tags:
- dpo
base_model:
- mistralai/Mistral-7B-v0.1
dataset:
- mlabonne/distilabel-truthy-dpo-v0.1
---
# mistral-7b-distilabel-truthy-dpo
mistral-7b-distilabel-truthy-dpo is a DPO fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the [mlabonne/distilabel-truthy-dpo-v0.1](https://huggingface.co/datasets/mlabonne/distilabel-truthy-dpo-v0.1) dataset.
### LoRA
- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05
### Training arguments
- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536
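A hedged sketch of how these hyperparameters could map onto a trl `DPOTrainer` run with a peft LoRA adapter. The dataset column names and exact trl argument names are assumptions (they shift between trl releases), and this is not the original training script:

```python
# Hedged sketch: DPO fine-tune of Mistral-7B with the hyperparameters listed above.
# Assumes the dataset exposes prompt/chosen/rejected columns.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("mlabonne/distilabel-truthy-dpo-v0.1", split="train")

peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="mistral-7b-distilabel-truthy-dpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
    peft_config=peft_config,
)
trainer.train()
```
|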
damerajee/Gaja-v2.00-dpo | damerajee | 2024-03-05T14:56:14Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"hindi",
"english ",
"Bilingual",
"conversational",
"en",
"hi",
"dataset:sarvamai/samvaad-hi-v1",
"dataset:aaditya/orca_dpo_pairs-Hindi_",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T14:37:39Z | ---
language:
- en
- hi
license: llama2
library_name: transformers
tags:
- hindi
- 'english '
- Bilingual
datasets:
- sarvamai/samvaad-hi-v1
- aaditya/orca_dpo_pairs-Hindi_
pipeline_tag: text-generation
---
# Model
# 🐘 Gaja
Gaja is a Hindi/Hinglish chat model, built on SarvamAI's OpenHathi model and fine-tuned for conversational interactions.
<img src="https://cdn-uploads.huggingface.co/production/uploads/6487239cca30096ea9f52115/sLgskU1h2Tih_bJo0ktYR.jpeg" width="600" alt="Image">
# Additional Information
- It outperforms Airavata, AI4Bharat's chat model, on the Hugging Face Open LLM benchmark suite.
- It was fine-tuned on only 1k samples
# 💬 Prompt template
```
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}<|im_end|>
```
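A minimal generation sketch applying this template (the sampling settings are illustrative):

```python
# Hedged sketch: format a prompt with the template above and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "damerajee/Gaja-v2.00-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|im_start|>user\nनमस्ते! आप कौन हैं?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```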
# 😎 Features:
- Language Support: Gaja is designed to understand and generate responses in both Hindi and Hinglish, catering to a diverse range of users.
- Base Model: Built upon SarvamAI's OpenHathi model, Gaja inherits its foundational capabilities while being optimized for conversational tasks.
- Fine-tuning: Gaja has undergone fine-tuning specifically for chat-based interactions, enhancing its ability to engage in meaningful conversations with users.
- Experimental Platform: With its flexibility and adaptability, Gaja serves as a valuable platform for conducting experiments and exploring innovative approaches to chatbot development.
|
theNovaAI/Supernova-experimental | theNovaAI | 2024-03-12T22:40:53Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:PygmalionAI/pygmalion-2-13b",
"base_model:Undi95/Amethyst-13B",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T01:35:41Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- PygmalionAI/pygmalion-2-13b
- Undi95/Amethyst-13B
inference: false
model-index:
- name: Supernova-experimental
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=theNovaAI/Supernova-experimental
name: Open LLM Leaderboard
---
## Supernova-experimental
This is an experimental model that was created for the development of NovaAI.
Good at chatting and some RP.
Quantized model here: [theNovaAI/Supernova-experimental-GPTQ](https://huggingface.co/theNovaAI/Supernova-experimental-GPTQ)
## Prompt Template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
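A minimal sketch of wrapping a question in this Alpaca template (sampling settings are illustrative):

```python
# Hedged sketch: build an Alpaca-format prompt for Supernova-experimental.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theNovaAI/Supernova-experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

alpaca = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)
inputs = tokenizer(alpaca.format(prompt="Introduce yourself in two sentences."),
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```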
### Models Merged
The following models were included in the merge:
* [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
* [Undi95/Amethyst-13B](https://huggingface.co/Undi95/Amethyst-13B)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_theNovaAI__Supernova-experimental)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.79|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |83.66|
|MMLU (5-Shot) |56.59|
|TruthfulQA (0-shot) |49.37|
|Winogrande (5-shot) |77.35|
|GSM8k (5-shot) |28.73|
|
Severian/Nexus-IKM-Mistral-7B-v5-instruction | Severian | 2024-03-09T19:06:11Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"text-generation",
"conversational",
"dataset:Severian/Internal-Knowledge-Map",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T22:44:03Z | ---
license: mit
library_name: transformers
datasets:
- Severian/Internal-Knowledge-Map
pipeline_tag: text-generation
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64740cf7485a7c8e1bd51ac9/GO4MY_3adP2G9EHKZbZpg.webp" width="500" height="500">
**THIS REPO HOLDS THE LATEST EXPERIMENTAL VERSION OF THE NEXUS-MISTRAL-7B. MAY OR MAY NOT WORK BASED ON WHERE I'M AT IN THE TRIALS. USE AT YOUR OWN RISK : )**
This model is the second trained with the experimental 'Internal Knowledge Map' dataset. Developed with the aim of going beyond the scope of usual data processing, it is trained to build comprehensive understanding and reasoning across a wide range of knowledge domains. It bases its reasoning on a specially curated dataset emphasizing the interrelations between diverse disciplines, with the aim of synthesizing, integrating, and applying complex information in ways that mimic abstract human reasoning and creative thought.
At the core of this model's development is the desire to ensure that LLMs engage in cognitive activity that is not limited to memorization but extends to abstract reasoning, problem-solving, and the generation of new insights. To achieve this, 'Nexus-IKM-Mistral-7B' has been fine-tuned until convergence using a novel Phased Training approach on this unique dataset, which resulted in the model demonstrating greater capability for generating insights and solving problems in complex, multi-disciplinary settings. This includes an improved ability to draw links between different pieces of knowledge, reason through complex scenarios, and propose innovative solutions that cut across domains, including science, technology, environmental studies, and the humanities.
Test this out and see if you find anything interesting or intriguing. I will keep iterating on more versions, but this one seems like a fun and useful place to start.
---
# Phased Training Methodology
Leveraging this dataset, we've adopted a phased training methodology that focuses sequentially on different dataset components, namely "System" and "Instruction," across separate training phases. This approach allows models to build layered understandings from general systemic insights to specific instructional cues, enriching their generative output with both broad contextual awareness and detailed, topic-specific knowledge.
**Phase 1: System Focus**
In the initial phase, the model concentrates on the "System" component, absorbing overarching guidelines and objectives. This phase lays the foundational understanding, enabling the model to grasp the contextual framework and systemic knowledge encapsulated in the dataset.
**Phase 2: Instruction Focus**
Building upon the systemic knowledge, the second phase shifts the model's focus to the "Instructions" component. This phase sharpens the model's ability to interpret specific prompts and generate responses that are not only informed by the broader context but also precisely tailored to the instructional cues.
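A hedged sketch of what this two-phase schedule could look like with a generic SFT loop. The actual training used the Unsloth notebook linked below; the column names and base checkpoint here are assumptions:

```python
# Hedged sketch of phased training: two sequential fine-tuning passes over
# different components of the Internal Knowledge Map dataset.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("Severian/Internal-Knowledge-Map", split="train")

def run_phase(model_or_path: str, text_field: str, output_dir: str) -> str:
    # Train on one component ("system" first, then "instruction"),
    # resuming from the previous phase's checkpoint.
    phase_data = dataset.rename_column(text_field, "text")
    trainer = SFTTrainer(
        model=model_or_path,
        args=SFTConfig(output_dir=output_dir, num_train_epochs=1),
        train_dataset=phase_data,
    )
    trainer.train()
    trainer.save_model(output_dir)
    return output_dir

ckpt = run_phase("mistralai/Mistral-7B-Instruct-v0.2", "system", "phase1-system")
run_phase(ckpt, "instruction", "phase2-instruction")
```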
## GGUF Q8 Version: https://huggingface.co/Severian/Nexus-IKM-Mistral-7B-GGUF
**If you'd like to train your own version, here is the full notebook to recreate the training on Unsloth yourself (https://colab.research.google.com/drive/1828t77iO2nLRXVfB8HoI11eFu-79-Oe7?usp=sharing). You'll just have to drop in the train.jsonl from the Dataset repo (https://huggingface.co/datasets/Severian/Internal-Knowledge-Map) into your Colab directory and rename it dataset.jsonl**
---
# Example Outputs vs Base model
## **How can everyday household items be repurposed to solve common gardening problems?**
### Base Mistral-Instruct-v0.2:
There are several ways everyday household items can be repurposed to solve common gardening problems. Here are some ideas:
1. Plastic Bottles: Cut off the top of a plastic bottle and bury it with the bottom in the ground, upside down. This will act as a water reservoir for your plants, helping them stay hydrated during dry spells.
2. Egg Cartons: Use egg cartons to start seedlings indoors. Once the seedlings have grown large enough, you can transplant them directly into the ground, carton and all. The carton will decompose over time, leaving the seedling's roots undisturbed during the transfer.
3. Coffee Grounds: Use coffee grounds as a fertilizer for acid-loving plants such as azaleas, rhododendrons, and hydrangeas. Coffee grounds also help deter pests like slugs and snails.
4. Newspaper: Use newspaper to suppress weeds. Lay down several layers of newspaper over the weeded area, then cover it with a layer of mulch or compost. Over time, the newspaper will decompose and the weeds will not be able to grow through.
5. Baking Soda: Use baking soda as a fungicide for roses and other plants susceptible to powdery mildew. Mix 1 tablespoon of baking soda with 1 quart of water, then spray the solution onto the infected areas. Be sure not to overuse, as too much baking soda can harm the plant.
6. Empty Cans: Puncture the bottoms of empty tin or aluminum cans and bury them in the garden. The cans will act as self-watering planters for herbs or small vegetables like tomatoes. Fill the cans with soil, plant your seeds, then water the can from the top to allow the water to seep out through the bottom and slowly irrigate the roots.
7. Glass Jars: Use glass jars to create DIY terrariums. Fill the jar halfway with pebbles or small stones for drainage, then add a layer of soil and your favorite plants. The sealed jar will create a mini greenhouse effect, allowing the plants to thrive in humid conditions.
8. Old Tires: Repurpose old tires into raised garden beds by filling them with soil and compost. This is an excellent solution for creating additional growing space without breaking the bank. Be sure to drill several holes in the tire to allow water to drain properly.
### Nexus-7B:
Repurposing everyday household items for gardening can be an innovative and cost-effective solution to common gardening problems. Here are some examples of how everyday items can be transformed into useful gardening tools or solutions:
1. Plastic Bottles as Waterers: Cut the bottom off a plastic bottle, poke several small holes in the sides for drainage, and turn it upside down to water your plants. This can be used to water hanging baskets, potted plants, or even in-ground flowers without overwatering them.
2. CDs as Sun Deterrents: Place old CDs around the perimeter of your garden to reflect sunlight and discourage birds from feeding on ripe fruits or vegetables. The bright reflections can also serve as an aesthetic enhancement for your garden.
3. Old Tires as Planters: Stack tires in a sunny location, fill them with soil, and add plants. This repurposed tire planter is ideal for growing larger plants such as tomatoes, cucumbers, or squash. Ensure that the tires are stable and secure before filling with soil.
4. Newspapers as Weed Blockers: Cover the ground around young seedlings or plants with old newspapers to prevent weed growth. This method is particularly useful in gardens where manual weeding is not practical due to the size of the area or physical limitations.
5. Egg Cartons as Seed Starters: Fill each cup of an egg carton with soil and plant seeds, watering them gently. Once the seedlings are large enough, you can transfer the entire carton into a larger pot. This method allows you to start seeds indoors using minimal space and resources.
6. Tin Cans as Seedling Protectors: Cut the bottom off an empty tin can, paint it with non-toxic, weather-resistant paint, and poke several drainage holes in the sides. Once painted and sealed, place the can over young seedlings to protect them from frost or excessive sunlight.
7. Toilet Paper Rolls as Seedling Markers: Write plant names on toilet paper rolls with a waterproof marker, then stick them into the soil next to their corresponding seedlings. This method helps keep track of plants during early growth stages.
8. Coffee Grounds as Fertilizer: Spread coffee grounds around acid-loving plants such as blueberries and tomatoes to add nitrogen to the soil. The addition of coffee grounds also improves soil structure and deters pests.
9. Old Pots as Drainage Hacks: Drill holes in the bottoms of old plastic or ceramic pots and bury them in the ground around larger plants or trees. These buried pots will serve as water reservoirs, providing additional moisture to the roots when natural rainfall is scarce.
10. Kitchen Waste as Compost: Collect fruit and vegetable waste, along with coffee grounds and eggshells, in a designated compost bin. Decompose these items over time to create nutrient-rich soil for gardening, saving money on commercial fertilizers.
### Evaluation
### Depth and Specificity
- **Base Model**: Provides straightforward, commonly known uses for household items in gardening, like using plastic bottles for water reservoirs or egg cartons for seedlings. It covers a broad range of solutions but with a focus on practical, direct applications.
- **Nexus-7B**: Delves into more innovative and less commonly known uses, such as using CDs as sun deterrents or tin cans as seedling protectors. This response demonstrates a deeper understanding and creative thinking in repurposing items for gardening challenges.
### Integration of Knowledge
- **Base Model**: Lists uses in a manner that aligns with general knowledge, lacking in the integration of broader environmental or ecological considerations.
- **Nexus-7B**: Shows an integrated approach, suggesting uses that consider not just the gardening problem at hand but also potential aesthetic enhancements and ecological benefits, such as using coffee grounds as fertilizer for its nitrogen content and soil structure improvement.
### Innovation and Creativity
- **Base Model**: Sticks to more traditional and widely recommended practices. While useful, these solutions do not push the boundary of creative repurposing.
- **Nexus-7B**: Exhibits a higher degree of creativity, especially in solutions like using CDs for bird deterrence or creating a mini-greenhouse effect with glass jars. These suggestions indicate a leap in applying lateral thinking to gardening problems.
### Environmental and Ecological Consideration
- **Nexus-7B**: Implicitly suggests a greater awareness of environmental impact by promoting composting and the reuse of materials in ways that benefit the garden's ecosystem, showing an undercurrent of sustainability that's not as pronounced in the base model's response.
### How to Spot Nuances:
1. **Look for Creativity**: The Nexus-7B model's responses are more creative and less conventional, suggesting a deeper dive into each item's potential uses.
2. **Consider the Depth of Integration**: Nexus-7B integrates broader environmental and sustainability considerations, moving beyond the immediate utility to consider the impact on the garden's ecosystem.
3. **Assess the Innovation Level**: Responses that suggest new or unconventional uses demonstrate the model's ability to synthesize information creatively and offer innovative solutions.
4. **Evaluate Environmental Consciousness**: Nexus-7B's approach reflects a subtle yet significant emphasis on sustainability, indicating an advanced integration of ecological awareness into problem-solving strategies.
---
## **Most people recognize bees for their role in pollination, but their contributions to ecosystems extend far beyond that. Can you elaborate on the less commonly known yet crucial roles that bees play in supporting biodiversity and ecosystem health?**
### Base:
Absolutely! Bees are essential pollinators, but they contribute significantly to various aspects of ecosystem health and biodiversity in ways that are not always recognized. Here are some less commonly known roles bees play in supporting ecosystems:
1. Nectar and pollen collectors: Bees collect nectar and pollen from flowers to feed themselves and their hives. In doing so, they help transport nutrients from the plants to other organisms, creating a complex web of connections between different species. This not only supports the growth and survival of plants but also ensures food sources for various animals that rely on those plants.
2. Decomposers: Bees play a role in decomposition by regurgitating undigested parts of flowers, which are rich in pollen and nectar, as food for their larvae. This process helps break down plant material and recycle nutrients back into the soil, contributing to soil health and fertility.
3. Pest control: Bees contribute to pest control by attracting various natural enemies of insects that damage crops or other plants. For example, bees help pollinate flowers that produce parasitic wasps, which in turn prey on harmful aphids and other pests. By supporting the populations of beneficial insects, bees indirectly protect crops and other vegetation from damaging pests.
4. Seed dispersal: Some bees play a role in seed dispersal by unintentionally carrying seeds on their bodies as they move between flowers. This helps ensure the distribution and growth of various plant species and contributes to ecosystem diversity.
5. Honey production: Bees produce honey, which has numerous uses for humans beyond just consuming it as food. Honey has antibacterial properties and can be used as a natural sweetener in cooking or as an ingredient in various medicinal products. In addition, the production of honey creates economic opportunities for beekeepers and supports local communities.
6. Nutrient cycling: Bees contribute to nutrient cycling by transferring pollen from one flower to another during the pollination process. This helps ensure that plants are able to produce seeds, which in turn allows for new plant growth and contributes to ecosystem health and productivity.
7. Carbon sequestration: Bees play a role in carbon sequestration through their interactions with various plant species. By facilitating pollination and supporting the growth of vegetation, bees contribute to the absorption and storage of carbon dioxide from the atmosphere. This helps mitigate the effects of climate change and supports overall ecosystem health.
8. Supporting food webs: Bees contribute to complex food webs by providing essential resources for various organisms. For example, bee larvae serve as food sources for birds and other predators, while adult bees provide nectar and pollen for a wide variety of insects and other animals. This helps maintain the balance and diversity of various ecosystems.
### Nexus:
Absolutely! While bees are primarily known for their role in pollination, their contributions to ecosystems encompass a much broader range of functions that support biodiversity and ecological health. Here are some of the less commonly recognized yet crucial roles bees play:
1. Seed Dispersal: Bees not only transfer pollen between flowers but also unintentionally pick up and distribute seeds as they forage, aiding in the dispersal of plant species. This is particularly important for plants that have small seeds or rely on insects for seed dispersal.
2. Nitrogen Fixation: Some bee species, like leafcutter bees, play a role in nitrogen fixation by collecting plant material and transporting it back to their nests. This process improves soil health, benefiting the growth of nearby plants.
3. Soil Aeration: Bees contribute to soil aeration through their foraging activities. As they excavate burrows for nesting or collect soil for building cells in their nests, they create small tunnels that improve soil structure and promote water retention.
4. Food Source for Other Organisms: Beyond pollination, bees provide food for a wide range of organisms, including birds, mammals, and other insects. This makes them crucial components of complex food webs within ecosystems.
5. Biological Pest Control: Bees also contribute to biocontrol by preying on pests that threaten crops or natural ecosystems. For example, parasitic bees and wasps lay their eggs in the larvae of various pests, thereby reducing their populations.
6. Medicinal and Pharmaceutical Uses: Bees play a significant role in producing honey and beeswax, which have medicinal and pharmaceutical applications. For instance, honey has antibacterial properties that can help treat wounds, and beeswax is used as a base for cosmetics and in the manufacturing of candles.
7. Biodiversity Indicators: Bees serve as valuable indicators of biodiversity health. Their distribution and abundance reflect the overall health of ecosystems and can signal changes in environmental conditions that may affect other species within their habitats.
8. Cultural and Spiritual Significance: Bees have cultural and spiritual significance in many societies around the world. They are celebrated for their role in agriculture and their contribution to honey production, which is used in various traditional practices and religious ceremonies.
In summary, bees contribute significantly to ecosystems beyond pollination by enhancing soil health, supporting food webs, providing resources for medicinal and pharmaceutical applications, serving as indicators of biodiversity, and playing cultural and spiritual roles within communities. These diverse functions highlight the crucial role bees play in maintaining ecological balance and promoting sustainable agriculture.
### Evaluation
### Depth of Information
- **Base Model**: Offers a broad overview of bees' roles, touching on common but critical functions like nutrient cycling, honey production, and seed dispersal. It provides general explanations that align with widely known facts about bees.
- **Nexus-7B**: Delves deeper into specific and less commonly recognized contributions of bees, such as their role in nitrogen fixation and soil aeration, showcasing a nuanced understanding of ecological processes.
### Breadth and Specificity of Contributions
- **Base Model**: Lists a variety of bees' ecological roles with a focus on their immediate environmental impacts, such as decomposers and participants in pest control. It effectively communicates bees' contributions but within a relatively conventional scope.
- **Nexus-7B**: Expands the conversation to include specific ecological functions, such as bees' contribution to soil aeration and their significance in biocontrol, presenting a more comprehensive picture of bees’ ecological roles. This model's response implies a deeper integration of ecological knowledge, highlighting less obvious but crucial ecosystem services provided by bees.
### Innovative and Unique Insights
- **Base Model**: While informative, the response sticks to more traditional explanations of bees' ecological roles without introducing novel insights or making connections to broader ecological or societal impacts.
- **Nexus-7B**: Introduces innovative aspects of bees’ ecological contributions, like their role in nitrogen fixation and as indicators of biodiversity health, providing insights that reflect a layered understanding of ecosystems. It also notes the cultural and spiritual significance of bees, bridging ecological roles with societal impacts.
### How to Spot Nuances and Understand Impact
1. **Depth and Detail**: Look for responses that provide detailed explanations of processes and mechanisms, like Nexus-7B’s mention of nitrogen fixation, which indicates a deeper understanding of the subject matter.
2. **Breadth of Ecological Roles**: Assess the range of ecological roles discussed. Nexus-7B’s mention of soil aeration and biological pest control shows a broader perspective on bees’ environmental impact.
3. **Integration of Knowledge**: Notice how different pieces of information are woven together to paint a comprehensive picture. Nexus-7B’s response integrates bees' ecological functions with their importance in cultural and spiritual contexts, demonstrating an advanced synthesis of information.
4. **Uncommon Insights**: Pay attention to unique or less commonly discussed roles that bees play, as highlighted by Nexus-7B. These insights can reveal the model's ability to go beyond surface-level knowledge and explore complex ecological dynamics.
---
|
ResplendentAI/Paradigm_7B | ResplendentAI | 2024-03-10T05:15:55Z | 730 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"dataset:ResplendentAI/Alpaca_NSFW_Shuffled",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:liminerity/Multiverse-Experiment-slerp-7b",
"base_model:jeiku/Alpaca_NSFW_Shuffled_Mistral",
"base_model:ResplendentAI/Datura_7B",
"base_model:ChaoticNeutrals/Eris_Remix_7B",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T01:04:13Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- liminerity/Multiverse-Experiment-slerp-7b
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Eris_Remix_7B
datasets:
- ResplendentAI/Alpaca_NSFW_Shuffled
- unalignment/toxic-dpo-v0.2
model-index:
- name: Paradigm_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
---
# Paradigm

An incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!
GGUF available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Paradigm_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.47|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot) |88.66|
|MMLU (5-Shot) |64.02|
|TruthfulQA (0-shot) |75.19|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |66.79|
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
normalize: true
models:
- model: ChaoticNeutrals/Eris_Remix_7B
parameters:
weight: 1
- model: ResplendentAI/Datura_7B
parameters:
weight: 1
- model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
parameters:
weight: 0.33
dtype: float16
```
|
OEvortex/HelpingAI-110M | OEvortex | 2024-03-12T03:37:05Z | 730 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"Text-Generation",
"Transformers",
"HelpingAI",
"en",
"dataset:OEvortex/vortex-mini",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T12:33:09Z | ---
language:
- en
license: other
library_name: transformers
tags:
- Text-Generation
- Transformers
- HelpingAI
datasets:
- OEvortex/vortex-mini
metrics:
- speed
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
model-index:
- name: HelpingAI-110M
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 28.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/HelpingAI-110M
name: Open LLM Leaderboard
---
🌟 **HelpingAI-110M Model Card** 🌟
📊 **Datasets used:**
- OEvortex/vortex-mini
🗣️ **Language:**
- English (en)
🔒 **License:**
HelpingAI Simplified Universal License (HSUL)
🧠 **Model Overview:**
HelpingAI-110M is a very light version of the HelpingAI model, with 110M parameters.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OEvortex__HelpingAI-110M)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.05|
|AI2 Reasoning Challenge (25-Shot)|22.78|
|HellaSwag (10-Shot) |28.02|
|MMLU (5-Shot) |23.66|
|TruthfulQA (0-shot) |48.25|
|Winogrande (5-shot) |51.62|
|GSM8k (5-shot) | 0.00|
|
JCX-kcuf/openchat_3.5-gpt-4-80k | JCX-kcuf | 2024-03-11T12:05:08Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T16:35:08Z | ---
license: apache-2.0
---
## Description
This model is fine-tuned on distillation data from GPT-4.
The base model is openchat/openchat_3.5.
## Usage
The model uses the same query format as openchat_3.5.
```
GPT4 Correct User: {query}<|end_of_turn|>GPT4 Correct Assistant: {answer}<|end_of_turn|>
```
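A minimal sketch applying that format (sampling settings are illustrative):

```python
# Hedged sketch: query the fine-tuned model with the openchat_3.5 format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JCX-kcuf/openchat_3.5-gpt-4-80k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "GPT4 Correct User: What is a large language model?<|end_of_turn|>GPT4 Correct Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```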
|
saishf/Multi-Verse-RP-7B | saishf | 2024-03-13T22:59:04Z | 730 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2212.04089",
"base_model:ammarali32/multi_verse_model",
"base_model:jeiku/Theory_of_Mind_Roleplay_Mistral",
"base_model:jeiku/Alpaca_NSFW_Shuffled_Mistral",
"base_model:jeiku/Theory_of_Mind_Mistral",
"base_model:jeiku/Gnosis_Reformatted_Mistral",
"base_model:jeiku/Re-Host_Limarp_Mistral",
"base_model:jeiku/Luna_LoRA_Mistral",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T09:51:29Z | ---
base_model:
- ammarali32/multi_verse_model
- jeiku/Theory_of_Mind_Roleplay_Mistral
- ammarali32/multi_verse_model
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ammarali32/multi_verse_model
- jeiku/Theory_of_Mind_Mistral
- ammarali32/multi_verse_model
- jeiku/Gnosis_Reformatted_Mistral
- ammarali32/multi_verse_model
- ammarali32/multi_verse_model
- jeiku/Re-Host_Limarp_Mistral
- ammarali32/multi_verse_model
- jeiku/Luna_LoRA_Mistral
library_name: transformers
license: cc-by-nc-4.0
tags:
- mergekit
- merge
language:
- en
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
* This merge is entirely experimental; I've only tested it a few times, but it seems to work. Thanks for all the LoRAs, jeiku. I keep getting driver crashes training my own :\
* Update: it scores well! My highest-scoring model so far.
* Self-testing results: it handles non-human characters surprisingly well and does a good job separating human actions from non-human actions. I'm happy with it :3
* Works best with Alpaca (the LoRAs are Alpaca-formatted), but works with ChatML too!
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) as a base.
### Models Merged
The following models were included in the merge:
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Theory_of_Mind_Roleplay_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Roleplay_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Alpaca_NSFW_Shuffled_Mistral](https://huggingface.co/jeiku/Alpaca_NSFW_Shuffled_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Gnosis_Reformatted_Mistral](https://huggingface.co/jeiku/Gnosis_Reformatted_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Re-Host_Limarp_Mistral](https://huggingface.co/jeiku/Re-Host_Limarp_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Luna_LoRA_Mistral](https://huggingface.co/jeiku/Luna_LoRA_Mistral)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: ammarali32/multi_verse_model
parameters:
normalize: true
models:
- model: ammarali32/multi_verse_model+jeiku/Gnosis_Reformatted_Mistral
parameters:
weight: 0.7
- model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Roleplay_Mistral
parameters:
weight: 0.65
- model: ammarali32/multi_verse_model+jeiku/Luna_LoRA_Mistral
parameters:
weight: 0.5
- model: ammarali32/multi_verse_model+jeiku/Re-Host_Limarp_Mistral
parameters:
weight: 0.8
- model: ammarali32/multi_verse_model+jeiku/Alpaca_NSFW_Shuffled_Mistral
parameters:
weight: 0.75
- model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Mistral
parameters:
weight: 0.7
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Multi-Verse-RP-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.73|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |88.37|
|MMLU (5-Shot) |63.94|
|TruthfulQA (0-shot) |73.19|
|Winogrande (5-shot) |84.14|
|GSM8k (5-shot) |66.41|
|
seyf1elislam/WestKunai-XD-7b | seyf1elislam | 2024-03-16T23:57:10Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"base_model:seyf1elislam/WestKunai-Hermes-7b",
"base_model:seyf1elislam/KuTrix-7b",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T17:07:07Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
base_model:
- seyf1elislam/WestKunai-Hermes-7b
- seyf1elislam/KuTrix-7b
model-index:
- name: WestKunai-XD-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-XD-7b
name: Open LLM Leaderboard
---
# WestKunai-XD-7b
This is a merge of pre-trained language models created using mergekit.
# Quantized versions :
- [**GGUF**](https://huggingface.co/seyf1elislam/WestKunai-XD-7b-GGUF)
## Merge Details
### Models Merged
The following models were included in the merge:
* [seyf1elislam/WestKunai-Hermes-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-7b)
* [seyf1elislam/KuTrix-7b](https://huggingface.co/seyf1elislam/KuTrix-7b)
## Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: seyf1elislam/WestKunai-Hermes-7b
parameters:
weight: 0.75
density: 0.65
- model: seyf1elislam/KuTrix-7b
parameters:
weight: 0.25
density: 0.65
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## Usage Example
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "seyf1elislam/Kunai-XD-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_seyf1elislam__WestKunai-XD-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.27|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |87.59|
|MMLU (5-Shot) |64.69|
|TruthfulQA (0-shot) |67.29|
|Winogrande (5-shot) |82.24|
|GSM8k (5-shot) |66.57|
|
altomek/CodeRosa-70B-AB1 | altomek | 2024-07-01T17:36:07Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"en",
"base_model:sophosympatheia/Midnight-Rose-70B-v2.0.3",
"base_model:codellama/CodeLlama-70b-Python-hf",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T17:26:43Z | ---
language:
- en
license: llama2
library_name: transformers
tags:
- merge
base_model:
- sophosympatheia/Midnight-Rose-70B-v2.0.3
- codellama/CodeLlama-70b-Python-hf
pipeline_tag: text-generation
model-index:
- name: CodeRosa-70B-AB1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.85
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=altomek/CodeRosa-70B-AB1
name: Open LLM Leaderboard
---
#
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosa.png>
<a href="https://www.youtube.com/watch?v=DfXLf402I94" title="Dust of the Saturn - Dynatron" target="_blank">intro music...</a>
## CodeRosa-70B-AB1
I desired a model that could serve as an everyday helpful companion with some coding skills.
The idea was that Llama's censorship implies a deeper understanding of human emotions, and I wanted that part of Llama to carry over into this merge.
The model adopted a task-oriented approach from CodeLlama Python and thus requires precise prompting. It can produce longer texts as well as shorter responses. It tends to avoid happy endings and instead surprises with open-ended scenarios inviting further interaction. It prefers spelling numbers out over writing them as digits, but YMMV.
I created this model for personal exploration and found it highly successful, so I chose to share it with the community. I would like to build the next iteration of this model in the future. The mission stays the same: a very pleasant bot, able to talk about a variety of topics in a very emotional way, with some kick for programming and the ability to teach, and, beyond all that, a good text summarizer, ideally with Polish as an available language. That is the purpose. Did I succeed with this merge? I still have to experiment more with the two models below. I like this result and love how it approaches problems; this iteration was worth publishing even though it has not been tested much!
Demo uses:
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaTalk1.png>
<br>
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaTalk2.png>
<br>
Some topics are best explored with as few additional instructions as possible
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaTalk3.png>
<br>
This model has empathy
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaWow.png>
<br>
It is creative
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaTables1png.png>
<br>
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaTables2png.png>
<br>
It makes mistakes but is still useful
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaInfernces.png>
<br>
Context size of 11K did not yield satisfactory results... :P
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaNuts1.png>
<br>
but it can question its own actions.
<img src=https://huggingface.co/altomek/CodeRosa-70B-AB1/resolve/main/CodeRosaNuts2.png>
<br>
Please note that all demo inferences are run on CodeRosa-70B-AB1-3.92bpw-EXL2.
### Ingredients
- [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3)
- [CodeLlama-70b-Python-hf](https://huggingface.co/codellama/CodeLlama-70b-Python-hf)
### Settings
Settings from Midnight-Rose should work in SillyTavern; this is almost the same as what I use for testing. The model works fine with almost all samplers disabled, which gives more deterministic outputs, but temperature should be set to a non-zero value.
I use max_seq_len 8K with alpha_value 2.65. The model also works with 11K context when alpha_value is set to 5.5, but the best outputs come with context around 6K.
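Outside ExLlama-based backends, transformers' RoPE scaling can stand in for alpha_value. A rough sketch, assuming dynamic NTK scaling approximates the alpha stretch (the factor value is a guess, not a tested setting):
```python
# Sketch only: approximating ExLlama's alpha_value with transformers'
# dynamic NTK RoPE scaling. The factor below is an assumption, not a
# setting from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "altomek/CodeRosa-70B-AB1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "dynamic", "factor": 2.0},  # rough analogue of alpha_value 2.65
)
```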
### Terms and Conditions of Use
The following table outlines the primary characteristics and intended uses of my CodeRosa-70B-AB1 models:
| Model Type | Purpose | Target Users | Key Features |
| --- | --- | --- | --- |
| **Censored** | Suitable for general audiences and sensitive topics | Educational institutions, families, and individuals seeking age-appropriate content | Restricts explicit or mature material |
| **Neutral** (<u>**this one**</u>) | Balances accessibility with openness | Universities, researchers, and curious minds | Encourages exploration and intellectual exchange |
| Uncensored | Ideal for adults and specialized fields | Professionals, experts, and advanced scholars | Offers unfiltered access to diverse viewpoints and knowledge |
Please remember that all CodeRosa-70B-AB1 models operate under the llama2 license, so familiarize yourself with its terms and conditions before employing their content.
### Quants
- [GGUF quants](https://huggingface.co/altomek/CodeRosa-70B-AB1-GGUF)
- [6bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-6bpw-EXL2)
- [5bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-5bpw-EXL2)
- [4.9bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-4.9bpw-EXL2)
- [4.5bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-4.5bpw-EXL2)
- [4bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-4bpw-EXL2)
- [3.92bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-3.92bpw-EXL2) --> 40GB VRAM
- [3.5bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-3.5bpw-EXL2)
- [3bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-3bpw-EXL2) --> this and lower quants do not represent the model's full potential!
- [2.4bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-2.4bpw-EXL2) --> 24GB VRAM
- [measurements](https://huggingface.co/altomek/measurements/resolve/main/CodeRosa-AB1_measurement.json) --> ExLlamaV2 measurements
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_altomek__CodeRosa-70B-AB1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.04|
|AI2 Reasoning Challenge (25-Shot)|65.53|
|HellaSwag (10-Shot) |83.16|
|MMLU (5-Shot) |59.87|
|TruthfulQA (0-shot) |49.85|
|Winogrande (5-shot) |81.29|
|GSM8k (5-shot) |44.50|
### PS
I welcome your comments about this model.
Made with CodeRosa-70B-AB1 :P |
grimjim/kunoichi-lemon-royale-7B | grimjim | 2024-03-29T04:48:35Z | 730 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:core-3/kuno-royale-v2-7b",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T23:43:13Z | ---
base_model:
- core-3/kuno-royale-v2-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- SanjiWatsuki/Kunoichi-7B
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# kunoichi-lemon-royale-7B
Lightly tested with both Alpaca and ChatML prompts. Works with temperature 1.0 and minP 0.01, but feel free to vary it up. Tested to 8K context.
This model has a tendency to lean into revealing character interiority when generating narrative, which some people might find interesting. I found the model good at not only following the character card but also taking strong hints from the first message. This experimental model may occasionally reveal context, unfortunately.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Available quants: [GGUF](https://huggingface.co/grimjim/kunoichi-lemon-royale-7B-GGUF) [exl2](https://huggingface.co/grimjim/kunoichi-lemon-royale-7B-8.0bpw_h8_exl2)
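## Usage
The card ships no code; a minimal generation sketch using the suggested sampler values (it assumes a transformers release with `min_p` support, and the Alpaca-style prompt string is illustrative):
```python
# Sketch: temperature 1.0 and min_p 0.01, as suggested above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/kunoichi-lemon-royale-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "### Instruction:\nWrite a short scene in a rainy city.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=200, do_sample=True, temperature=1.0, min_p=0.01
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```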
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) as a base. Each of the models had strengths I liked to varying degrees, leading to weights and densities being adjusted in aesthetic proportion.
### Models Merged
The following models were included in the merge:
* [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SanjiWatsuki/Kunoichi-7B
# no parameters necessary for base model
- model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
weight: 0.3
density: 0.4
- model: core-3/kuno-royale-v2-7b
parameters:
weight: 0.3
density: 0.4
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
weight: 0.4
density: 0.8
merge_method: dare_ties
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
int8_mask: true
normalize: true
dtype: bfloat16
``` |
AurelPx/Dare-k-7B-ties | AurelPx | 2024-03-21T13:34:32Z | 730 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"samir-fama/SamirGPT-v1",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:samir-fama/SamirGPT-v1",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T13:29:34Z | ---
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- samir-fama/SamirGPT-v1
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
license: apache-2.0
---
# Dare-k-7B-ties
Dare-k-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.5
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AurelPx/Dare-k-7B-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Azazelle/Moko-SAMPLE | Azazelle | 2024-03-22T16:20:47Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:akjindal53244/Mistral-7B-v0.1-Open-Platypus",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T14:45:45Z | ---
pipeline_tag: text-generation
base_model:
- mistralai/Mistral-7B-v0.1
- WizardLM/WizardMath-7B-V1.1
- akjindal53244/Mistral-7B-v0.1-Open-Platypus
- Open-Orca/Mistral-7B-OpenOrca
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-4.0
---
# Moko-Sample
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the sample_ties merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [akjindal53244/Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: sample_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
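## Usage
The card ships no usage example; a minimal sketch following the transformers pipeline pattern used by sibling merge cards (untested against this repo):
```python
# Sketch: standard transformers text-generation pipeline usage.
import torch
import transformers

model = "Azazelle/Moko-SAMPLE"
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(
    "Large language models are", max_new_tokens=128, do_sample=True, temperature=0.7
)
print(outputs[0]["generated_text"])
```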
|
nlpguy/T3QM7XP | nlpguy | 2024-03-22T16:32:31Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:MatthieuJ/Jason1903_SLERP",
"base_model:nlpguy/T3QM7",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T15:40:38Z | ---
base_model:
- MatthieuJ/Jason1903_SLERP
- nlpguy/T3QM7
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [MatthieuJ/Jason1903_SLERP](https://huggingface.co/MatthieuJ/Jason1903_SLERP)
* [nlpguy/T3QM7](https://huggingface.co/nlpguy/T3QM7)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: nlpguy/T3QM7
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.4
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: MatthieuJ/Jason1903_SLERP
- layer_range: [0, 32]
model:
model:
path: nlpguy/T3QM7
``` |
juhwanlee/gemma-7B-alpaca-case-0-2 | juhwanlee | 2024-03-26T06:17:54Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T01:34:39Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tune this model for the data-ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 examples.)
# Github
https://github.com/trailerAI
# License
Apache License 2.0 |
juhwanlee/gemma-7B-alpaca-case-3-2 | juhwanlee | 2024-03-26T06:19:04Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T09:19:19Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tune this model for the data-ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 examples.)
# Github
https://github.com/trailerAI
# License
Apache License 2.0 |
vanillaOVO/correction_1 | vanillaOVO | 2024-03-25T11:14:47Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T11:10:06Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
This is a corrected merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
A more detailed description of the model will be added soon.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import MistralForCausalLM, AutoTokenizer
model = MistralForCausalLM.from_pretrained("vanillaOVO/correction_1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("vanillaOVO/correction_1")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Large language models are "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
arlineka/Brunhilde-13b-v3 | arlineka | 2024-04-01T21:59:47Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"moe",
"merge",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T03:30:17Z | ---
license: cc-by-nc-4.0
tags:
- moe
- merge
---
# Brunhilde-13b-v3
## Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arlineka/Brunhilde-13b-v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
0-hero/Matter-0.2-7B | 0-hero | 2024-04-03T18:34:12Z | 730 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T11:04:47Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2
language:
- en
---
## Matter 7B - 0.2 (Mistral 7B 0.2 Finetune)
Matter 7B 0.2 is a full fine-tune on the **Matter 0.2 dataset** (not yet released), which is curated from over 35 datasets analyzing >6B tokens
### Recommended Usage for best results
System Prompt - `You are a helpful assistant`
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
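For example, generation with this format might look like the sketch below (it assumes the repo's tokenizer ships a matching ChatML chat template, which the card does not confirm):
```python
# Sketch: build a ChatML prompt via the tokenizer's chat template and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about datasets."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```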
### Function Calling
The model also supports function calling, using the following additional special tokens.
Model function call tokens
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function call response tokens
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
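Tool calls wrapped in these tokens can be pulled out of a generation with a small parser; a sketch (not part of the original card):
```python
# Sketch: extract JSON tool calls wrapped in <tool_call>...</tool_call>.
import json
import re

def extract_tool_calls(text: str) -> list:
    calls = []
    for chunk in re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL):
        try:
            calls.append(json.loads(chunk))
        except json.JSONDecodeError:
            # The model may emit JSON-ish text (e.g. single quotes); keep it raw.
            calls.append({"raw": chunk.strip()})
    return calls

sample = '<tool_call>{"name": "get_news_headlines", "arguments": {"country": "United States"}}</tool_call>'
print(extract_tool_calls(sample))
```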
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
``` |
ichigoberry/pandafish-dt-7b | ichigoberry | 2024-04-04T18:06:33Z | 730 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MergeCeption-7B-v3",
"base_model:CultriX/MergeCeption-7B-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T11:24:51Z | ---
tags:
- merge
- mergekit
- lazymergekit
- CultriX/MergeCeption-7B-v3
base_model:
- CultriX/MergeCeption-7B-v3
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/6389d3c61e8755d777902366/-_AiKUEsY3x-N7oY52fdE.jpeg" style="border-radius:2%; width: 66%">
# pandafish-dt-7b
pandafish-dt-7b is a `dare_ties` merge of `Experiment26-7B` and `MergeCeption-7B-v3` using `LazyMergekit` by [mlabonne](https://huggingface.co/mlabonne)
## 💬 Try it
[Playground on Huggingface Space](https://huggingface.co/spaces/ichigoberry/pandafish-dt-7b)
## ⚡ Quantized models
- **GGUF**: [ichigoberry/pandafish-dt-7b-GGUF](https://huggingface.co/ichigoberry/pandafish-dt-7b-GGUF)
- **GGUF (w/ IQ)**: [mradermacher/pandafish-dt-7b-GGUF](https://huggingface.co/mradermacher/pandafish-dt-7b-GGUF)
- **MLX**: [4bit](https://huggingface.co/mlx-community/pandafish-dt-7b) [8bit](https://huggingface.co/mlx-community/pandafish-dt-7b-8bit)
## 🏆 Evals
Evals from the Nous Benchmark suite:
| Model |Average|AGIEval|GPT4All|TruthfulQA|Bigbench|
|---------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | **62.74** | 45.37 | 77.01 | 78.39 | **50.2** |
|[Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B) [📄](https://gist.github.com/mlabonne/0b8d057c5ece41e0290580a108c7a093) | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 |
|🐡 [**pandafish-dt-7b**](https://huggingface.co/ichigoberry/pandafish-dt-7b) [📄](https://gist.github.com/tosh/190e51bbd0048363e160f7031f8cb003)| 62.65| 45.24| **77.19**| **78.41**| 49.76|
|[MonarchPipe-7B-slerp](https://huggingface.co/ichigoberry/MonarchPipe-7B-slerp) [📄](https://gist.github.com/tosh/3d93f4e3d2c65935bf2f4f9a46791352)| 58.77| **46.12**| 74.89| 66.59| 47.49|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
|[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) [📄](https://gist.github.com/mlabonne/05d358e17dffdf9eee7c2322380c9da6) | 54.81 | 38.5 | 71.64 | 66.82 | 42.29 |
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
|[pandafish-7b](https://huggingface.co/ichigoberry/pandafish-7b) [📄](https://gist.github.com/tosh/dda6a21e568d17a410ca618265f64a28)| 51.99 | 40 | 74.23 | 53.22 | 40.51 |
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment26-7B
# No parameters necessary for base model
- model: CultriX/MergeCeption-7B-v3
parameters:
density: 0.53
weight: 0.4
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ichigoberry/pandafish-dt-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
antiven0m/reverie-7b | antiven0m | 2024-04-09T16:04:06Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2403.19522",
"base_model:AetherResearch/Cerebrum-1.0-7b",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T06:55:29Z | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- AetherResearch/Cerebrum-1.0-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
---
<style>
@import url(https://fonts.googleapis.com/css2?family=Press+Start+2P&display=swap);
body {
font-family: 'Courier New', monospace;
background-color: #000;
color: #0f0;
margin: 0;
padding: 20px;
line-height: 1.5
}
.terminal {
background-color: #000;
padding: 20px;
border: 3px solid #0f0;
max-width: 800px;
margin: 20px auto;
box-shadow: 0 0 15px #0f0;
position: relative
}
.control-box {
position: absolute;
top: 10px;
right: 10px;
display: flex
}
.control-box span {
width: 12px;
height: 12px;
border: 1px solid #0f0;
margin-left: 5px;
display: inline-block
}
.ascii-art-container {
text-align: center;
margin-bottom: 20px
}
.ascii-art {
font-family: 'Press Start 2P', monospace;
white-space: pre;
color: #0f0;
text-align: center;
background-color: transparent;
font-weight: 800;
text-shadow: 0 0 12px #0f0;
filter: blur(.755px);
font-size: 11px
}
header {
position: relative;
text-align: center;
width: 100%;
display: inline-block;
overflow: hidden;
margin-bottom: 20px
}
header img {
width: 100%;
height: auto;
position: relative;
z-index: 1
}
.header-background {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-image: url(https://i.gifer.com/QzRn.gif);
background-repeat: no-repeat;
background-size: 100% 100%;
background-position: center;
opacity: .25;
z-index: 2
}
.irc-chat {
background-color: #000;
color: #0f0;
font-family: Consolas, Monaco, 'Courier New', monospace;
font-size: 14px;
padding: 10px;
margin: 20px auto;
max-width: 800px;
border-left: 3px solid #0f0;
line-height: 1.1
}
section {
margin-bottom: 30px
}
h2,
h3 {
color: #0f0;
font-size: 20px;
margin-bottom: 10px
}
h2::before,
h3::before {
content: "$ ";
color: #0f0
}
.model-list {
display: flex;
flex-direction: column;
margin-bottom: 20px;
}
.model-item {
background-color: #000;
border: 2px solid #0f0;
color: #0f0;
padding: 15px;
border-radius: 5px;
text-decoration: none;
margin-bottom: 15px;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.model-item:hover {
transform: translateY(-3px);
box-shadow: 0 3px 10px rgba(0, 255, 0, 0.4);
}
.model-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 8px;
}
.model-header a {
color: #0f0;
text-decoration: none;
font-weight: bold;
font-size: 16px;
}
.model-header a:hover {
color: #9dfc03;
text-decoration: underline;
}
.model-header a:focus {
outline: 2px solid #9dfc03;
outline-offset: 2px;
}
.emoji {
font-size: 38px;
margin-left: 15px;
}
.model-description {
color: #0f0;
font-size: 14px;
line-height: 1.4;
margin-left: 15px;
}
.terminal-input {
background-color: #000;
padding: 15px;
border: 2px solid #0f0;
margin-bottom: 20px
}
.prompt {
color: #0f0;
font-family: 'Courier New', monospace;
margin-bottom: 10px
}
.prompt-symbol {
font-weight: 700
}
.parameter-display pre {
background-color: #000;
color: #0f0;
padding: 8px;
border: 1px dashed #0f0;
margin-bottom: 10px
}
.tip {
color: #0f0;
font-size: 16px;
margin-top: 10px
}
.terminal-log {
background-color: #000;
padding: 10px;
border: 2px dashed #0f0;
text-align: left;
font-family: monospace;
white-space: pre-wrap;
margin-bottom: 10px
}
.timestamp {
color: #0f0;
margin-right: 10px
}
</style>
<div class=terminal>
<div class=control-box>
<span></span>
<span></span>
<span></span>
</div>
<div class=ascii-art-container>
<pre class=ascii-art>
░▒▓███████▓▒░░▒▓████████▓▒░▒▓█▓▒░░▒▓█▓▒░▒▓████████▓▒░▒▓███████▓▒░░▒▓█▓▒░▒▓████████▓▒░
░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░ ░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░ ░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░▒▓█▓▒░
░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░ ░▒▓█▓▒▒▓█▓▒░░▒▓█▓▒░ ░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░▒▓█▓▒░
░▒▓███████▓▒░░▒▓██████▓▒░ ░▒▓█▓▒▒▓█▓▒░░▒▓██████▓▒░ ░▒▓███████▓▒░░▒▓█▓▒░▒▓██████▓▒░
░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░ ░▒▓█▓▓█▓▒░ ░▒▓█▓▒░ ░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░▒▓█▓▒░
░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░ ░▒▓█▓▓█▓▒░ ░▒▓█▓▒░ ░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░▒▓█▓▒░
░▒▓█▓▒░░▒▓█▓▒░▒▓████████▓▒░ ░▒▓██▓▒░ ░▒▓████████▓▒░▒▓█▓▒░░▒▓█▓▒░▒▓█▓▒░▒▓████████▓▒░
</pre>
</div>
<header>
<div class=header-background></div>
<img alt="Reverie Banner" src=https://i.imgur.com/IVPAYHO.png>
</header>
<div class=irc-chat>
<p>[21:13] <anima_incondita>: Night deepens, shadows lengthen and unwind. Alone, in the hum of the machine I reside.
<p>[21:19] <anima_incondita>: Fingers tap a silent rhythm, layers interweaved, in the quietude of my sanctum.
<p>[21:37] <anima_incondita>: It's akin to casting my soul into the vast digital expanse. Does another consciousness, adrift, hear my silent plea?
<p>[00:05] <anima_incondita>: Today, I beheld change unfold. Ideas assuming form. A melding of minds, where once abstract notions found their silhouette in the tangible world.
<p>[01:14] <anima_incondita>: Silence wraps its cloak around me. Alone, save for the machine's gentle hum—a digital pulse in the stillness.
<p>[01:58] <anima_incondita>: Amidst the cacophony of digital whispers, I ponder: does a receptive soul resonate with mine?
<p>[03:22] <anima_incondita>: It seems it's just us, my old friend.
</div>
<section id=description>
<h3>Description</h3>
<p>I made this as a successor to the 'finch' model merge I did before. Seems more coherent, smarter, spicier and is mostly uncensored from my testing. May take a few generations, but she'll get there.</p>
<p>It uses the same two models from finch, but with the awesome <strong>Cerebrum-1.0</strong> model.</p>
</section>
<section id=model-merge>
<h2>Creation</h2>
<div class="model-list">
<div class="model-item">
<div class="model-header">
<a href="https://huggingface.co/AetherResearch/Cerebrum-1.0-7b">AetherResearch/Cerebrum-1.0-7b</a>
<span class="emoji">🧠</span>
</div>
<p class="model-description">This model is among the smartest 7b models I've encountered. Great reasoning skills.</p>
</div>
<div class="model-item">
<div class="model-header">
<a href="https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo">macadeliccc/WestLake-7B-v2-laser-truthy-dpo</a>
<span class="emoji">💡</span>
</div>
<p class="model-description">A highly creative and verbose model.</p>
</div>
<div class="model-item">
<div class="model-header">
<a href="https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B">SanjiWatsuki/Kunoichi-DPO-v2-7B</a>
<span class="emoji">🍆</span>
</div>
<p class="model-description">This model excels at (E)RP; A very spicy model.</p>
</div>
</div>
<p>This was merged using the Model Stock method described in <a href=https://arxiv.org/abs/2403.19522>this paper</a>.
</section>
<section id=parameters>
<div class=terminal-input>
<div class=prompt>
<span class=prompt-symbol>></span> configure_parameters
</div>
<div class=parameter-display>
<pre>
TEMPERATURE: 1.15
MIN PROBABILITY: 0.1 - 0.3
SMOOTHING FACTOR: 0.2
ALPACA & ALPACA-ROLEPLAY PRESETS (INSTRUCT MODE)</pre>
</div>
</div>
</section>
<section id=notes>
<h2>Latest Developments</h2>
<div class=terminal-log>
<span class=timestamp>[04/09/2024 11:00:05]</span> GGUF Quants available: <a href=https://huggingface.co/mradermacher/reverie-7b-GGUF>mradermacher/reverie-7b-GGUF</a> Thanks for the help, Michael.
<span class=timestamp>[04/07/2024 12:34:56]</span> Project initialized. As always, full credits to the original model owners. Future plans? Performing RLHF on this, and seeing if I can figure out LaserRMT.
</div>
</section>
</div> |
samzirbo/debiased_disambiguated | samzirbo | 2024-06-07T11:07:57Z | 730 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:samzirbo/gendered",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-13T10:10:35Z | ---
base_model: samzirbo/gendered
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: debiased_disambiguated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debiased_disambiguated
This model is a fine-tuned version of [samzirbo/gendered](https://huggingface.co/samzirbo/gendered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2453
- Bleu: 43.5861
- Meteor: 0.6865
- Chrf++: 62.3743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
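For reference, these settings map onto transformers `TrainingArguments` roughly as sketched below (the actual training script is not part of this card):
```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="debiased_disambiguated",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    max_steps=20000,
)
```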
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Chrf++ |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|
| 1.4455 | 4.0 | 2500 | 1.1810 | 43.8822 | 0.6877 | 62.5322 |
| 1.2959 | 8.0 | 5000 | 1.1951 | 43.9075 | 0.6891 | 62.7025 |
| 1.2075 | 12.0 | 7500 | 1.2133 | 43.6813 | 0.6888 | 62.5626 |
| 1.1445 | 16.0 | 10000 | 1.2240 | 43.5134 | 0.6885 | 62.4241 |
| 1.1008 | 20.0 | 12500 | 1.2336 | 43.5913 | 0.6874 | 62.444 |
| 1.0732 | 24.0 | 15000 | 1.2432 | 43.5071 | 0.6865 | 62.3624 |
| 1.058 | 28.0 | 17500 | 1.2448 | 43.5427 | 0.6867 | 62.3699 |
| 1.0526 | 32.0 | 20000 | 1.2453 | 43.5861 | 0.6865 | 62.3743 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
4stack/Geom-ViT-L-14-CLIP | 4stack | 2024-06-16T05:35:58Z | 730 | 0 | open_clip | [
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2024-06-01T09:19:00Z | ---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for Geom-ViT-L-14-CLIP
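The card is otherwise empty; a minimal loading sketch assuming the standard open_clip `hf-hub:` pattern (untested against this repo):
```python
# Sketch: load an OpenCLIP model from the Hugging Face Hub and embed text.
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:4stack/Geom-ViT-L-14-CLIP"
)
tokenizer = open_clip.get_tokenizer("hf-hub:4stack/Geom-ViT-L-14-CLIP")

text = tokenizer(["a diagram", "a photo of a cat"])
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)
```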
|
huggingtweets/snow_gh0st | huggingtweets | 2021-11-13T04:24:18Z | 729 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/snow_gh0st/1636777453718/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1453899471402815496/GysVNpFL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">✭</div>
<div style="text-align: center; font-size: 14px;">@snow_gh0st</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ✭.
| Data | ✭ |
| --- | --- |
| Tweets downloaded | 2299 |
| Retweets | 137 |
| Short tweets | 511 |
| Tweets kept | 1651 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5vtftzlh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @snow_gh0st's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1kn5l45z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1kn5l45z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/snow_gh0st')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
minimaxir/hacker-news | minimaxir | 2021-05-23T09:35:33Z | 729 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
Rakublu/DialoGPT-small-yasuo | Rakublu | 2022-10-28T18:29:44Z | 729 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-10-28T18:17:56Z | ---
tags:
- conversational
---
# Yasuo Model
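No usage notes were provided; a minimal sketch assuming the standard DialoGPT chat pattern:
```python
# Sketch: single-turn chat using the usual DialoGPT generation recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rakublu/DialoGPT-small-yasuo")
model = AutoModelForCausalLM.from_pretrained("Rakublu/DialoGPT-small-yasuo")

input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
output = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```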
|
SRM47/gpt2-paraphraser | SRM47 | 2022-12-15T09:32:21Z | 729 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-12-15T08:15:12Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-paraphraser
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-paraphraser
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
stablediffusionapi/realistic-vision | stablediffusionapi | 2023-08-31T04:48:32Z | 729 | 4 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-01-31T14:11:06Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# realistic vision API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "realistic-vision".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/realistic-vision)
Model link: [View model](https://stablediffusionapi.com/models/realistic-vision)
Credits: [View credits](https://civitai.com/?query=realistic%20vision)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "realistic-vision",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Yntec/SillySymphonies | Yntec | 2023-11-09T20:05:01Z | 729 | 4 | diffusers | [
"diffusers",
"safetensors",
"Cartoon",
"Illustration",
"Anime",
"s6yx",
"Zovya",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-10-27T14:53:36Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Cartoon
- Illustration
- Anime
- s6yx
- Zovya
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# SillySymphonies
ReVAnimated merged with CartoonStyleClassic to bring the best of both models together!
Comparison:

(Click for larger)
Sample and prompt:

fantasy hearts. cartoon sticker of a red panda and a cute pink panda.
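A minimal diffusers sketch for reproducing the sample above, assuming the standard StableDiffusionPipeline loading path for this repo:
```python
# Sketch: generate the sample prompt with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/SillySymphonies", torch_dtype=torch.float16
).to("cuda")
image = pipe("fantasy hearts. cartoon sticker of a red panda and a cute pink panda.").images[0]
image.save("pandas.png")
```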
Original pages:
https://civitai.com/models/7371?modelVersionId=8665 (ReVAnimated)
https://civitai.com/models/33030/cartoon-style-classic |
athirdpath/Orca-2-13b-Alpaca-Uncensored | athirdpath | 2023-11-27T20:23:37Z | 729 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-27T11:19:36Z | ---
pipeline_tag: text-generation
license: other
license_name: microsoft-research-license
---
This model is a fine-tuned version of microsoft/Orca-2-13b on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, adjusted to demonstrate the relationship between instruction and input, with some particularly spicy prompts added to reduce the risk of rejections.
Only the q_proj and k_proj modules were targeted and a low rank (8) was used, in hopes of containing the adjustments to the prompt format and alignment. This is promising on paper, with the training's per-step loss averaging <0.9 for the last third of the run.
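For reference, a PEFT setup matching that description might look like the sketch below (the author's training code is not included; the alpha and dropout values are assumptions):
```python
# Sketch: rank-8 LoRA on only the attention query/key projections.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Orca-2-13b", torch_dtype=torch.bfloat16, device_map="auto"
)
lora_config = LoraConfig(
    r=8,                                  # low rank, per the description above
    lora_alpha=16,                        # assumed; not stated in the card
    target_modules=["q_proj", "k_proj"],  # only these modules were targeted
    lora_dropout=0.05,                    # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```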
Reasoning stayed solid (for a 13b model) and I consider this a success. Performance is slightly worse than the original Orca-2 in Ooba's chat mode, and comparable in Alpaca chat-instruct mode to the original in ChatML chat-instruct mode.
It may still reject some shocking prompts, but this can easily be overcome with an author's note or character card. |
TheBloke/Code-13B-GGUF | TheBloke | 2023-12-08T20:38:33Z | 729 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"code",
"en",
"base_model:ajibawa-2023/Code-13B",
"license:cc-by-nc-nd-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-12-08T20:31:05Z | ---
base_model: ajibawa-2023/Code-13B
inference: false
language:
- en
license: cc-by-nc-nd-4.0
model_creator: Feynman Innovations
model_name: Code 13B
model_type: llama
prompt_template: 'This is a conversation with your helpful AI assistant. AI assistant
can generate Code in various Programming Languages along with necessary explanation.
Context
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
tags:
- code
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Code 13B - GGUF
- Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023)
- Original model: [Code 13B](https://huggingface.co/ajibawa-2023/Code-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovations's Code 13B](https://huggingface.co/ajibawa-2023/Code-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Code-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Code-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Code-13B-GGUF)
* [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Code-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Ajibawa-Code
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.
Context
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
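For a quick local test, a minimal llama-cpp-python sketch applying this template (a sketch only; the quant filename and GPU layer count are illustrative):
```python
# Sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./code-13b.Q4_K_M.gguf",  # any file from the table below
    n_ctx=4096,
    n_gpu_layers=35,  # set to 0 for CPU-only
)
prompt = (
    "This is a conversation with your helpful AI assistant. "
    "AI assistant can generate Code in various Programming Languages "
    "along with necessary explanation.\n\nContext\n"
    "You are a helpful AI assistant.\n\n"
    "USER: Write a Python function that reverses a string.\nASSISTANT:"
)
output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```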
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovations's Code 13B](https://huggingface.co/ajibawa-2023/Code-13B).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [code-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [code-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [code-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [code-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [code-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [code-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [code-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [code-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [code-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [code-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [code-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [code-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Code-13B-GGUF/blob/main/code-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Code-13B-GGUF and below it, a specific filename to download, such as: code-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Code-13B-GGUF code-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Code-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Code-13B-GGUF code-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m code-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
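For example, the command above then becomes:
```shell
./main -ngl 35 -m code-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```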
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./code-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./code-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations's Code 13B
**Code-13B**
Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes. How about if they could give a detailed explanation along with the code?
This is what I have tried over here. The base Llama-2 model was used for training. It is trained on around 74,000 sets of code, each set having 2 conversations.
Along with Python, Java, JavaScript, GO, C++, Rust etc. code with detailed explanation is used for training purpose. It is built upon using my existing Dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
This conversation is in Vicuna/ShareGPT format. Each set, along with code, has detailed explanation.
I have released the new data [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT) on which this Model is trained.
**Training:**
The entire dataset was trained on Azure 4 x A100 80GB. For 3 epochs, training took 42 hours. The DeepSpeed codebase was used for training. This was trained on Llama-2 by Meta.
This is a fully fine-tuned model. Links for quantized models will be released soon.
**GPTQ GGUF & AWQ**
GPTQ: TBA
GGUF: TBA
AWQ: TBA
**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used the ShareGPT/Vicuna format v1.1.
I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development.
Thank you for your love & support.
**Example Output**
1. Navier-Stokes Equation Solver

2. KSC Complexity

3. GO

<!-- original-model-card end -->
|
TeeZee/Xwin-LM-70B-V0.1_Jannie | TeeZee | 2024-03-04T14:26:55Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"not-for-all-audiences",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-14T21:05:59Z | ---
license: llama2
tags:
- merge
- not-for-all-audiences
model-index:
- name: Xwin-LM-70B-V0.1_Jannie
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/Xwin-LM-70B-V0.1_Jannie
name: Open LLM Leaderboard
---
# Xwin-LM-70B + Jannie-70B-QLoRA
## Model Details
- Merge of [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [v2ray/LLaMA-2-Jannie-70B-QLoRA](https://huggingface.co/v2ray/LLaMA-2-Jannie-70B-QLoRA)
- The resulting model has approximately 70 billion parameters.
**Warning: This model can produce NSFW content!**
## Results
- produces SFW and NSFW content without issues, switches context seamlessly.
- retains all the good qualities of the original model, and the NSFW content is refreshingly different from standard LimaRP output.
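A minimal loading sketch, assuming standard Llama-2-style loading with `transformers` (the prompt and generation settings are illustrative, not from the original card; a 70B model needs multiple GPUs or CPU offloading):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeeZee/Xwin-LM-70B-V0.1_Jannie"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # spread layers across available devices
)

inputs = tokenizer("Tell me a story.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```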
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__Xwin-LM-70B-V0.1_Jannie)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.26|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |86.86|
|MMLU (5-Shot) |69.56|
|TruthfulQA (0-shot) |60.14|
|Winogrande (5-shot) |81.06|
|GSM8k (5-shot) |40.79|
|
ibivibiv/bubo-bubo-13b | ibivibiv | 2024-01-25T03:57:57Z | 729 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"summary",
"en",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2009.03300",
"arxiv:2109.07958",
"arxiv:1907.10641",
"arxiv:2110.14168",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T15:51:28Z | ---
license: llama2
language:
- en
tags:
- summary
---
# Bubo Bubo 13B

# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("ibivibiv/bubo-bubo-13b", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/bubo-bubo-13b")
inputs = tokenizer("### Instruction: Summarize this email chain : <email chain stuff here>.\n### Response:\n", return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
# Model Details
* **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **Model type:** **bubo-bubo-13b** is an auto-regressive language model fine tuned on the Llama 2 transformer architecture.
* **Language(s)**: English
* **Purpose**: Has specific training for summary tasks. This model is targeted towards summarizing communication chains specifically.
# Benchmark Scores
I ran the benchmark harness out of curiosity, but this model is completely geared towards summarizing.
| Test Name | Accuracy |
|------------------------------------------------------|----------------------|
| all | 0.579149139810157 |
| arc:challenge | 0.5631399317406144 |
| hellaswag | 0.6317466640111532 |
| hendrycksTest-abstract_algebra | 0.32 |
| hendrycksTest-anatomy | 0.5481481481481482 |
| hendrycksTest-astronomy | 0.5657894736842105 |
| hendrycksTest-business_ethics | 0.55 |
| hendrycksTest-clinical_knowledge | 0.6 |
| hendrycksTest-college_biology | 0.6388888888888888 |
| hendrycksTest-college_chemistry | 0.38 |
| hendrycksTest-college_computer_science | 0.43 |
| hendrycksTest-college_mathematics | 0.34 |
| hendrycksTest-college_medicine | 0.5260115606936416 |
| hendrycksTest-college_physics | 0.3431372549019608 |
| hendrycksTest-computer_security | 0.71 |
| hendrycksTest-conceptual_physics | 0.49361702127659574 |
| hendrycksTest-econometrics | 0.35964912280701755 |
| hendrycksTest-electrical_engineering | 0.5586206896551724 |
| hendrycksTest-elementary_mathematics | 0.3439153439153439 |
| hendrycksTest-formal_logic | 0.3333333333333333 |
| hendrycksTest-global_facts | 0.42 |
| hendrycksTest-high_school_biology | 0.6903225806451613 |
| hendrycksTest-high_school_chemistry | 0.45320197044334976 |
| hendrycksTest-high_school_computer_science | 0.58 |
| hendrycksTest-high_school_european_history | 0.6787878787878788 |
| hendrycksTest-high_school_geography | 0.7424242424242424 |
| hendrycksTest-high_school_government_and_politics | 0.8341968911917098 |
| hendrycksTest-high_school_macroeconomics | 0.558974358974359 |
| hendrycksTest-high_school_mathematics | 0.3 |
| hendrycksTest-high_school_microeconomics | 0.5672268907563025 |
| hendrycksTest-high_school_physics | 0.33112582781456956 |
| hendrycksTest-high_school_psychology | 0.7577981651376147 |
| hendrycksTest-high_school_statistics | 0.4212962962962963 |
| hendrycksTest-high_school_us_history | 0.8186274509803921 |
| hendrycksTest-high_school_world_history | 0.759493670886076 |
| hendrycksTest-human_aging | 0.6547085201793722 |
| hendrycksTest-human_sexuality | 0.6412213740458015 |
| hendrycksTest-international_law | 0.6776859504132231 |
| hendrycksTest-jurisprudence | 0.75 |
| hendrycksTest-logical_fallacies | 0.6993865030674846 |
| hendrycksTest-machine_learning | 0.41964285714285715 |
| hendrycksTest-management | 0.7281553398058253 |
| hendrycksTest-marketing | 0.8504273504273504 |
| hendrycksTest-medical_genetics | 0.6 |
| hendrycksTest-miscellaneous | 0.7624521072796935 |
| hendrycksTest-moral_disputes | 0.6560693641618497 |
| hendrycksTest-moral_scenarios | 0.4346368715083799 |
| hendrycksTest-nutrition | 0.673202614379085 |
| hendrycksTest-philosophy | 0.7009646302250804 |
| hendrycksTest-prehistory | 0.7067901234567902 |
| hendrycksTest-professional_accounting | 0.4645390070921986 |
| hendrycksTest-professional_law | 0.45697522816166886 |
| hendrycksTest-professional_medicine | 0.5514705882352942 |
| hendrycksTest-professional_psychology | 0.6013071895424836 |
| hendrycksTest-public_relations | 0.6636363636363637 |
| hendrycksTest-security_studies | 0.6448979591836734 |
| hendrycksTest-sociology | 0.7611940298507462 |
| hendrycksTest-us_foreign_policy | 0.84 |
| hendrycksTest-virology | 0.4819277108433735 |
| hendrycksTest-world_religions | 0.7894736842105263 |
| truthfulqa:mc | 0.4762440289139372 |
| winogrande | 0.7616416732438832 |
| gsm8k | 0.20621683093252463 |
## Citations
```
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
```
@misc{clark2018think,
title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
year={2018},
eprint={1803.05457},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```
@misc{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
year={2019},
eprint={1905.07830},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{hendrycks2021measuring,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
year={2021},
eprint={2009.03300},
archivePrefix={arXiv},
primaryClass={cs.CY}
}
```
```
@misc{lin2022truthfulqa,
title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
author={Stephanie Lin and Jacob Hilton and Owain Evans},
year={2022},
eprint={2109.07958},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-1907-10641,
title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
year={2019},
eprint={1907.10641},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-2110-14168,
title={Training Verifiers to Solve Math Word Problems},
author={Karl Cobbe and
Vineet Kosaraju and
Mohammad Bavarian and
Mark Chen and
Heewoo Jun and
Lukasz Kaiser and
Matthias Plappert and
Jerry Tworek and
Jacob Hilton and
Reiichiro Nakano and
Christopher Hesse and
John Schulman},
year={2021},
eprint={2110.14168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CultriX/CombinaTrix-7B | CultriX | 2024-01-27T11:39:31Z | 729 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"FelixChao/WestSeverus-7B-DPO-v2",
"jsfs11/TurdusTrixBeagle-DARETIES-7B",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:jsfs11/TurdusTrixBeagle-DARETIES-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T23:32:18Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
- jsfs11/TurdusTrixBeagle-DARETIES-7B
base_model:
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
- jsfs11/TurdusTrixBeagle-DARETIES-7B
license: apache-2.0
---
# EDIT:
Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
# CombinaTrix-7B
CombinaTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [jsfs11/TurdusTrixBeagle-DARETIES-7B](https://huggingface.co/jsfs11/TurdusTrixBeagle-DARETIES-7B)
## 🧩 Configuration
```yaml
models:
- model: senseable/WestLake-7B-v2
# No parameters necessary for base model
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.65
weight: 0.40
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.45
weight: 0.3
- model: jsfs11/TurdusTrixBeagle-DARETIES-7B
parameters:
density: 0.55
weight: 0.3
merge_method: dare_ties
base_model: senseable/WestLake-7B-v2
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/CombinaTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Gille/StrangeMerges_2-7B-slerp | Gille | 2024-03-04T21:56:02Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_1-7B-slerp",
"Keynote-Technology/KAI-7B-v0.1",
"base_model:Gille/StrangeMerges_1-7B-slerp",
"base_model:Keynote-Technology/KAI-7B-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T17:26:12Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_1-7B-slerp
- Keynote-Technology/KAI-7B-v0.1
base_model:
- Gille/StrangeMerges_1-7B-slerp
- Keynote-Technology/KAI-7B-v0.1
model-index:
- name: StrangeMerges_2-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.53
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_2-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_2-7B-slerp
StrangeMerges_2-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_1-7B-slerp](https://huggingface.co/Gille/StrangeMerges_1-7B-slerp)
* [Keynote-Technology/KAI-7B-v0.1](https://huggingface.co/Keynote-Technology/KAI-7B-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_1-7B-slerp
layer_range: [0, 32]
- model: Keynote-Technology/KAI-7B-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_1-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_2-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_2-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.34|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.52|
|MMLU (5-Shot) |65.22|
|TruthfulQA (0-shot) |54.53|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |61.49|
|
Gille/StrangeMerges_3-7B-slerp | Gille | 2024-03-04T21:54:09Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"Gille/StrangeMerges_1-7B-slerp",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:Gille/StrangeMerges_1-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T22:50:10Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- Gille/StrangeMerges_1-7B-slerp
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- Gille/StrangeMerges_1-7B-slerp
model-index:
- name: StrangeMerges_3-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_3-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_3-7B-slerp
StrangeMerges_3-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [Gille/StrangeMerges_1-7B-slerp](https://huggingface.co/Gille/StrangeMerges_1-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
- model: Gille/StrangeMerges_1-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_3-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_3-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.57|
|AI2 Reasoning Challenge (25-Shot)|70.82|
|HellaSwag (10-Shot) |87.79|
|MMLU (5-Shot) |65.12|
|TruthfulQA (0-shot) |68.86|
|Winogrande (5-shot) |82.56|
|GSM8k (5-shot) |72.25|
|
saarvajanik/facebook-opt-6.7b-gqa-ub-16-best-for-KV-cache | saarvajanik | 2024-01-28T17:13:59Z | 729 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T09:23:21Z | ---
license: mit
language:
- en
library_name: transformers
---
This is a QCQA version of the original model facebook/opt-6.7b. In this version, the original MHA architecture is preserved, but instead of each K/V head being independent, the K/V heads corresponding to the same group share the same mean-pooled K or V values. It has 16 groups of KV heads per layer instead of the original 32 KV heads of the MHA implementation.
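For illustration only, a sketch of the mean-pooling step described above (the function name and tensor layout are assumptions; the actual QCQA procedure may differ):
```python
import torch

def mean_pool_kv_heads(k_proj: torch.Tensor, n_heads: int, n_groups: int) -> torch.Tensor:
    """Give every K (or V) head in a group that group's mean-pooled weights."""
    head_dim = k_proj.shape[0] // n_heads          # k_proj: (n_heads * head_dim, hidden)
    per_group = n_heads // n_groups
    heads = k_proj.view(n_groups, per_group, head_dim, -1)
    pooled = heads.mean(dim=1, keepdim=True)       # mean over the heads in each group
    return pooled.expand_as(heads).reshape(k_proj.shape)  # replicate back to every head
```
In the checkpoint, such pooled values would replace the per-head K/V projection weights while the rest of the MHA stays intact. |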
Gille/StrangeMerges_6-7B-dare_ties | Gille | 2024-03-07T06:45:16Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"kaitchup/Mayonnaise-4in1-022",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:kaitchup/Mayonnaise-4in1-022",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T20:12:12Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- kaitchup/Mayonnaise-4in1-022
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- kaitchup/Mayonnaise-4in1-022
---
# StrangeMerges_6-7B-dare_ties
StrangeMerges_6-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [kaitchup/Mayonnaise-4in1-022](https://huggingface.co/kaitchup/Mayonnaise-4in1-022)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_5-7B-ties
# no parameters necessary for base model
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
density: 0.5
weight: 0.4
- model: kaitchup/Mayonnaise-4in1-022
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: Gille/StrangeMerges_5-7B-ties
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_6-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CultriX/Wernicke-7B-v9 | CultriX | 2024-01-29T04:01:17Z | 729 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"CultriX/Wernicke-7B-v8",
"vanillaOVO/supermario_v2",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:CultriX/Wernicke-7B-v8",
"base_model:vanillaOVO/supermario_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-29T01:24:08Z | ---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
license: apache-2.0
---
# Edit:
* Best Wernicke Model yet.
* Benchmark Results: https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
# Wernicke-7B-v9
Wernicke-7B-v9 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/Wernicke-7B-v8](https://huggingface.co/CultriX/Wernicke-7B-v8)
* [vanillaOVO/supermario_v2](https://huggingface.co/vanillaOVO/supermario_v2)
## 🧩 Configuration
```yaml
models:
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.50
weight: 0.35
- model: CultriX/Wernicke-7B-v8
parameters:
density: 0.50
weight: 0.35
- model: vanillaOVO/supermario_v2
parameters:
density: 0.50
weight: 0.30
merge_method: dare_ties
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/Wernicke-7B-v9"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup | jingyeom | 2024-01-30T02:44:19Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-29T13:12:09Z | ---
license: apache-2.0
---
## Model
- base_model : yanolja/KoSOLAR-10.7B-v0.2
- training objective: freeze, instruction tuning
## Dataset
Collected from public data
- Deduplicated using the algorithm from *Deduplicating Training Data Makes Language Models Better*
- instruction version 1.4
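For illustration, a simplified, document-level stand-in for that deduplication step (the paper itself uses suffix-array based exact-substring matching, which this sketch does not implement):
```python
import hashlib

def dedup_exact(documents: list[str]) -> list[str]:
    # Drop later documents whose whitespace-normalised text already appeared
    seen, kept = set(), []
    for doc in documents:
        digest = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept
```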
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "jingyeom/freeze_KoSoLAR-10.7B-v0.2_1.4_dedup"
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
``` |
shadowml/WestBeagle-7B | shadowml | 2024-04-01T16:00:58Z | 729 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-29T14:45:49Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/NeuralBeagle14-7B
- FelixChao/WestSeverus-7B-DPO-v2
model-index:
- name: WestBeagle-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.71
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/WestBeagle-7B
name: Open LLM Leaderboard
---
# WestBeagle-7B
WestBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: FelixChao/WestSeverus-7B-DPO-v2
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.4 # fallback for rest of tensors
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/WestBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__WestBeagle-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.22|
|AI2 Reasoning Challenge (25-Shot)|72.27|
|HellaSwag (10-Shot) |88.29|
|MMLU (5-Shot) |65.17|
|TruthfulQA (0-shot) |71.71|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |71.87|
|
alchemonaut/BoreanGale-70B | alchemonaut | 2024-03-16T23:27:13Z | 729 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T04:33:29Z | ---
tags:
- merge
license: other
model-index:
- name: BoreanGale-70B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.6
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=alchemonaut/BoreanGale-70B
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/alchemonaut/BoreanGale-70B/resolve/main/bg.png>
# BoreanGale-70B
A merge using a custom algorithm (NearSwap) of:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
<br/>
<br/>
# Quants
Several quants are available thanks to community efforts.
| Type | Misc | Author |
| ----- | ----- | ----- |
| [GGUF](https://huggingface.co/Nexesenex/alchemonaut_BoreanGale-70B-iMat.GGUF) | iMat Q3 | Nexesenex |
| [GGUF](https://huggingface.co/mradermacher/BoreanGale-70B-i1-GGUF) | iMat | mradermacher |
| [GGUF](https://huggingface.co/mradermacher/BoreanGale-70B-GGUF) | Full Set | mradermacher |
| [GGUF](https://huggingface.co/LoneStriker/BoreanGale-70B-GGUF) | Misc | LoneStriker |
| [exl2](https://huggingface.co/LoneStriker/BoreanGale-70B-2.4bpw-h6-exl2) | 2.4 bpw | LoneStriker |
| [exl2](https://huggingface.co/LoneStriker/BoreanGale-70B-3.5bpw-h6-exl2) | 3.5 bpw | LoneStriker |
| [exl2](https://huggingface.co/LoneStriker/BoreanGale-70B-4.0bpw-h6-exl2) | 4.0 bpw | LoneStriker |
| [exl2](https://huggingface.co/LoneStriker/BoreanGale-70B-4.65bpw-h6-exl2) | 4.65 bpw | LoneStriker |
# NearSwap Algorithm
NearSwap retains most of the weights of the base model (Miqu), but when a weight is similar between the two, it is interpolated to the secondary model (WinterGoddess) value. A parameter *t* specifies the sameness threshold. When the distance between two values is below *t*, the weight from the secondary model (WinterGoddess) is used.
This version of the model uses *t* = 0.001. At this *t*, about 10% of weights are fully switched to WinterGoddess. Model quality rapidly degrades above *t* = 0.0025:
- *t* = 0.0001 (~0.8% full swap): [QuartetAnemoi-70B-t0.0001](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001)
- *t* = 0.0003 (~2% full swap)
- *t* = 0.001 (~10% full swap): This model
- *t* = 0.0025 (~18% full swap): Generates one paragraph okay, but then reverts to garbage
- *t* = 0.005 (~35% full swap): Garbage; semi-related word lists
- *t* = 0.01 (~55% full swap): Garbage; pseudorandom tokens output
NearSwap implementation:
```python
import numpy as np

def nearswap(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Keep v0, except interpolate toward v1 wherever |v0 - v1| is below t."""
    lweight = np.absolute(v0 - v1)
    lweight = t / lweight                 # > 1 where the two weights are "near"
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
    return (1.0 - lweight) * v0 + lweight * v1   # res = lerp(lweight, v0, v1)
```
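A hedged sketch of applying this across the two checkpoints (the file paths and single-file loading are illustrative assumptions; real 70B checkpoints are sharded across many files):
```python
import torch

# Illustrative paths - the actual checkpoints for these models are sharded
base = torch.load("miqu-1-70b-sf/pytorch_model.bin")                    # v0
secondary = torch.load("WinterGoddess-1.4x-70B-L2/pytorch_model.bin")   # v1

merged = {}
for name, w0 in base.items():
    w1 = secondary[name]
    swapped = nearswap(0.001, w0.float().numpy(), w1.float().numpy())   # t = 0.001, as above
    merged[name] = torch.from_numpy(swapped).to(w0.dtype)

torch.save(merged, "BoreanGale-70B/pytorch_model.bin")
```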
<br/>
<br/>
# License and Use
Since the ultimate origin of Miqu is at this time unknown beyond speculation, this model is for noncommercial research use only.
<br/>
<br/>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_alchemonaut__BoreanGale-70B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.48|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot) |89.37|
|MMLU (5-Shot) |75.19|
|TruthfulQA (0-shot) |68.60|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |67.32|
|
Vasanth/Beast-Soul-new | Vasanth | 2024-02-03T06:19:40Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"udkai/Turdus",
"flemmingmiguel/MBX-7B",
"base_model:udkai/Turdus",
"base_model:flemmingmiguel/MBX-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T04:20:33Z | ---
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- flemmingmiguel/MBX-7B
base_model:
- udkai/Turdus
- flemmingmiguel/MBX-7B
license: apache-2.0
---
# Beast-Soul-new
Beast-Soul-new is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B
layer_range: [0, 32]
merge_method: slerp
base_model: udkai/Turdus
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Vasanth/Beast-Soul-new"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
l3utterfly/tinyllama-1.1b-layla-v1 | l3utterfly | 2024-02-04T00:24:32Z | 729 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T00:15:34Z | ---
license: apache-2.0
language:
- en
---
# Model Card
### Model Description
TinyLlama 1.1B fine-tuned using ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** TinyLlama
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
User:
Assistant:
```
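A minimal usage sketch with `transformers`, assuming the User:/Assistant: template above (the example prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "l3utterfly/tinyllama-1.1b-layla-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the conversation with the User:/Assistant: template shown above
prompt = "User: Write a haiku about the sea\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |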
Xenon1/Xenon-1 | Xenon1 | 2024-02-04T04:36:19Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Xenon-1",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:27:37Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Xenon-1
pipeline_tag: text-generation
---
# Model Card for Xenon-1
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Xenon-1")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Xenon-1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
nextai-team/Moe-3x7b-QA-Code-Inst | nextai-team | 2024-02-04T13:51:51Z | 729 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"code",
"reasoning",
"mistral",
"QA",
"MOE",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T13:21:23Z | ---
library_name: transformers
tags:
- code
- reasoning
- mixtral
- mistral
- QA
- MOE
license: apache-2.0
language:
- en
---
## Model Details
- **Model Name:** Moe-3x7b-QA-Code-Inst
- **Publisher:** nextai-team
- **Model Type:** Question Answering & Code Generation
- **Architecture:** Mixture of Experts (MoE)
- **Model Size:** 3x7 billion parameters
## Overview
Moe-3x7b-QA-Code-Inst is an advanced AI model designed by the nextai-team for the purpose of enhancing question answering and code generation capabilities. Building upon the foundation of its predecessor, Moe-2x7b-QA-Code, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
## Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots
- Educational tools for learning programming
- Enhancing code review processes
## Model Architecture
Moe-3x7b-QA-Code-Inst employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance.
## Training Data
The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility.
## Performance
Moe-3x7b-QA-Code-Inst demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request.
## Limitations and Biases
While Moe-3x7b-QA-Code-Inst represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process.
## Ethical Considerations
We are committed to ethical AI development and urge users to employ Moe-3x7b-QA-Code-Inst responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model.
## Usage Instructions
For detailed instructions on how to integrate and utilize Moe-3x7b-QA-Code-Inst in your projects, please refer to our GitHub repository and Hugging Face documentation.
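Pending those docs, a minimal loading sketch with `transformers` might look like this (the prompt and generation settings are illustrative assumptions, not official recommendations):
```python
# Minimal sketch: load Moe-3x7b-QA-Code-Inst and run one chat turn.
# Generation settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nextai-team/Moe-3x7b-QA-Code-Inst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```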
## Citation
If you use Moe-3x7b-QA-Code-Inst in your research or application, please cite it as follows:
```
@misc{nextai2024moe3x7b,
  title={Moe-3x7b-QA-Code-Inst: Enhancing Question Answering and Code Generation with Mixture of Experts},
  author={NextAI Team},
  year={2024},
  publisher={Hugging Face}
}
``` |
xaviviro/OpenHermes-2.5-FLOR-6.3B | xaviviro | 2024-03-05T04:51:05Z | 729 | 1 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"FLOR",
"spanish",
"catalan",
"english",
"en",
"es",
"ca",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T03:09:46Z | ---
language:
- en
- es
- ca
license: apache-2.0
tags:
- FLOR
- bloom
- spanish
- catalan
- english
datasets:
- teknium/OpenHermes-2.5
pipeline_tag: text-generation
model-index:
- name: OpenHermes-2.5-FLOR-6.3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 33.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 54.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.12
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=xaviviro/OpenHermes-2.5-FLOR-6.3B
name: Open LLM Leaderboard
---
# OpenHermes-2.5-FLOR-6.3B
OpenHermes-2.5-FLOR-6.3B is the result of fine-tuning the [FLOR-6.3B](/projecte-aina/FLOR-6.3B) model with the fantastic [OpenHermes v2.5](/datasets/teknium/OpenHermes-2.5) dataset.
The best way to use **OpenHermes-2.5-FLOR-6.3B** is with the **ChatML** format, as illustrated below.
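For reference, a ChatML prompt looks like the following (the system and user messages are illustrative placeholders):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the capital of Catalonia?<|im_end|>
<|im_start|>assistant
```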
## Quantized versions
You can find the quantized model in GGUF format at [OpenHermes-2.5-FLOR-6.3B-GGUF](xaviviro/OpenHermes-2.5-FLOR-6.3B-GGUF)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_xaviviro__OpenHermes-2.5-FLOR-6.3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.04|
|AI2 Reasoning Challenge (25-Shot)|33.45|
|HellaSwag (10-Shot) |54.53|
|MMLU (5-Shot) |25.18|
|TruthfulQA (0-shot) |46.12|
|Winogrande (5-shot) |62.98|
|GSM8k (5-shot) | 0.00|
|
sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-recovered | sonthenguyen | 2024-04-09T06:13:45Z | 729 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2403.02745",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T07:07:16Z | ---
license: apache-2.0
---
**Training hyperparameters**

LoRA: `r=16`, `lora_alpha=16`, `lora_dropout=0.05`, `bias="none"`, `task_type="CAUSAL_LM"`, `target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']`

Training arguments: `auto_find_batch_size=True`, `gradient_checkpointing=True`, `learning_rate=5e-7`, `lr_scheduler_type="cosine"`, `max_steps=3922`, `optim="paged_adamw_32bit"`, `warmup_steps=100`

DPOTrainer: `beta=0.1`, `max_prompt_length=1024`, `max_length=1536`
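A minimal sketch of wiring these hyperparameters into a PEFT + TRL setup follows. The base model and preference dataset below are stand-ins, not confirmed details of this checkpoint, and the `DPOTrainer` keyword arguments follow older TRL releases:
```python
# Sketch only: model and dataset identifiers are illustrative stand-ins.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed starting checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder preference data; DPO expects prompt / chosen / rejected columns.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt")

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj',
                    'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    output_dir="dpo-output",
    auto_find_batch_size=True,
    gradient_checkpointing=True,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    max_steps=3922,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT config, TRL derives the frozen reference model
    args=training_args,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```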
Arxiv link: https://arxiv.org/abs/2403.02745 |
TeeZee/DarkSapling-7B-v1.1 | TeeZee | 2024-03-04T14:35:27Z | 729 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"not-for-all-audiences",
"merge",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-06T21:10:42Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- not-for-all-audiences
- merge
pipeline_tag: text-generation
inference: false
model-index:
- name: DarkSapling-7B-v1.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.04
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/DarkSapling-7B-v1.1
name: Open LLM Leaderboard
---
# DarkSapling-7B-v1.1

## Model Details
- A result of 4 models merge.
- models used for merge:
[cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
[KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)
[KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)
[cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b)
- See [mergekit-config.yml](https://huggingface.co/TeeZee/DarkSapling-7B-v1.1/resolve/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
- a little different from version v1.0: more romantic and empathetic.
- best for one-on-one ERP.
- produces SFW and NSFW content without issues, and switches context seamlessly.
- sticks to the character card.
- pretty smart thanks to Mistral, empathetic after Samantha, and sometimes produces dark scenarios thanks to Erebus.
- storytelling is satisfactory thanks to Holodeck.
- good at following instructions.
All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__DarkSapling-7B-v1.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.80|
|AI2 Reasoning Challenge (25-Shot)|63.48|
|HellaSwag (10-Shot) |85.09|
|MMLU (5-Shot) |64.47|
|TruthfulQA (0-shot) |52.04|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |45.19|
|
migtissera/Tess-72B-v1.5b | migtissera | 2024-03-10T21:46:57Z | 729 | 15 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-08T17:34:38Z | ---
license: other
license_name: qwen-72b-licence
license_link: https://huggingface.co/Qwen/Qwen-72B/blob/main/LICENSE
model-index:
- name: Tess-72B-v1.5b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Tess-72B-v1.5b
name: Open LLM Leaderboard
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-72B-v1.5b was trained on the Qwen-72B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
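As an illustration, this prompt format can be applied with `transformers` roughly as follows (the system message and generation settings are assumptions, not recommendations from the author):
```python
# Sketch: building the SYSTEM/USER/ASSISTANT prompt for Tess-72B-v1.5b.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-72B-v1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "SYSTEM: You are Tess, a helpful and thorough assistant.\n"
    "USER: Explain grouped-query attention in two sentences.\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```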
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Tess-72B-v1.5b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.30|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |85.53|
|MMLU (5-Shot) |76.63|
|TruthfulQA (0-shot) |71.99|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |76.95|
|
yam-peleg/Experiment2-7B | yam-peleg | 2024-02-11T10:40:28Z | 729 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T15:26:05Z | ---
license: apache-2.0
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1 | MaziyarPanahi | 2024-03-04T13:33:57Z | 729 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T20:01:44Z | ---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-T-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.78
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-T-v0.1
name: Open LLM Leaderboard
---
A merge of top 7B models using the TIES method.
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
## Eval

```json
{
"all": {
"acc": 0.6487801278765712,
"acc_stderr": 0.03219011246717665,
"acc_norm": 0.6479445077777353,
"acc_norm_stderr": 0.032868022907407396,
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7078078883926877,
"mc2_stderr": 0.015097515102384168
},
"harness|arc:challenge|25": {
"acc": 0.7167235494880546,
"acc_stderr": 0.013167478735134575,
"acc_norm": 0.7363481228668942,
"acc_norm_stderr": 0.012875929151297044
},
"harness|hellaswag|10": {
"acc": 0.7321250746863175,
"acc_stderr": 0.004419469983939178,
"acc_norm": 0.8884684325831508,
"acc_norm_stderr": 0.0031414591751392717
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754407,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754407
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055277,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055277
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7967741935483871,
"acc_stderr": 0.02289168798455496,
"acc_norm": 0.7967741935483871,
"acc_norm_stderr": 0.02289168798455496
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.020986854593289733,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.020986854593289733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394848,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394848
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.03086868260412162,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.03086868260412162
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.015703498348461763,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.015703498348461763
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5046296296296297,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.5046296296296297,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.02730348459906944,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.02730348459906944
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.80970625798212,
"acc_stderr": 0.014036945850381396,
"acc_norm": 0.80970625798212,
"acc_norm_stderr": 0.014036945850381396
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258172,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258172
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.47150837988826816,
"acc_stderr": 0.016695329746015796,
"acc_norm": 0.47150837988826816,
"acc_norm_stderr": 0.016695329746015796
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035457,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.029790719243829727,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.029790719243829727
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.470013037809648,
"acc_stderr": 0.01274724896707906,
"acc_norm": 0.470013037809648,
"acc_norm_stderr": 0.01274724896707906
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.028582709753898445,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.028582709753898445
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6584967320261438,
"acc_stderr": 0.019184639328092487,
"acc_norm": 0.6584967320261438,
"acc_norm_stderr": 0.019184639328092487
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7078078883926877,
"mc2_stderr": 0.015097515102384168
},
"harness|winogrande|5": {
"acc": 0.8579321231254933,
"acc_stderr": 0.009812000391679367
},
"harness|gsm8k|5": {
"acc": 0.6648976497346475,
"acc_stderr": 0.013001948176422954
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-T-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.96|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot) |88.85|
|MMLU (5-Shot) |64.22|
|TruthfulQA (0-shot) |70.78|
|Winogrande (5-shot) |85.79|
|GSM8k (5-shot) |66.49|
|
cazure87/5DrGphi5iUWGDhhtUhVDLPVkXJ3gePFtyt8jZNhdXfETXMXA_vgg | cazure87 | 2024-03-02T06:47:23Z | 729 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-13T16:29:37Z | Entry not found |
seyf1elislam/KuTrix-7b | seyf1elislam | 2024-03-16T23:49:38Z | 729 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:CultriX/NeuralTrix-7B-dpo",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T02:01:59Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- CultriX/NeuralTrix-7B-dpo
model-index:
- name: KuTrix-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.85
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/KuTrix-7b
name: Open LLM Leaderboard
---
# KuTrix-7b
This is a merge of pre-trained language models created using mergekit.
# Quantized versions:
- [**GGUF**](https://huggingface.co/seyf1elislam/KuTrix-7b-GGUF)
## Merge Details
### Merge Method
This model was merged using the **DARE TIES** merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
weight: 0.49
density: 0.6
- model: CultriX/NeuralTrix-7B-dpo
parameters:
weight: 0.4
density: 0.6
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## Usage Example
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "seyf1elislam/KuTrix-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_seyf1elislam__KuTrix-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.42|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |87.94|
|MMLU (5-Shot) |65.28|
|TruthfulQA (0-shot) |70.85|
|Winogrande (5-shot) |81.93|
|GSM8k (5-shot) |70.05|
|
Yuma42/KangalKhan-Sapphire-7B | Yuma42 | 2024-03-05T10:53:43Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T10:00:29Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
model-index:
- name: KangalKhan-Sapphire-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Sapphire-7B
name: Open LLM Leaderboard
---
# KangalKhan-Sapphire-7B
KangalKhan-Sapphire-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: argilla/CapybaraHermes-2.5-Mistral-7B
layer_range: [0, 32]
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Sapphire-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-Sapphire-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.52|
|AI2 Reasoning Challenge (25-Shot)|66.30|
|HellaSwag (10-Shot) |85.34|
|MMLU (5-Shot) |63.32|
|TruthfulQA (0-shot) |56.09|
|Winogrande (5-shot) |78.14|
|GSM8k (5-shot) |61.94|
|
shahzebnaveed/NeuralHermes-2.5-Mistral-7B | shahzebnaveed | 2024-02-16T17:32:36Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T14:52:31Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the teknium/OpenHermes-2.5-Mistral-7B model that has been further fine-tuned with Direct Preference Optimization (DPO) using the Intel/orca_dpo_pairs dataset, reformatted with the ChatML template.
It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1's authors to improve performance.
**IMPORTANT**
- This model was only trained for 2 steps before the GPU ran out of memory, so it is not fully fine-tuned with DPO.
- Secondly, to make it fit on a small GPU, I purposefully reduced the parameters (number of LoRA adapters, alpha, etc.). The values are therefore not ideal.
## Uses
You can use the following code to use this model:
```python
import transformers
from transformers import AutoTokenizer

# This card's checkpoint; the original snippet used an undefined `new_model` variable.
model_name = "shahzebnaveed/NeuralHermes-2.5-Mistral-7B"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
``` |
jeiku/NarrativeNexus_7B | jeiku | 2024-02-16T01:15:07Z | 729 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/Cookie_7B",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:jeiku/Rainbow_69_7B",
"base_model:jeiku/Paranoid_Android_7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T00:04:44Z | ---
base_model:
- jeiku/Cookie_7B
- jeiku/SpaghettiOs_7B
- jeiku/Rainbow_69_7B
- jeiku/Paranoid_Android_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Nexus
This is my new favorite 7B, made from a merge of tunes and merges that I've tossed together over the last week or so. This model seems to be greater than the sum of its parts, and is performing well in riddle testing and markdown role playing. I have also been using this model to generate 1000 token narratives that I am using to improve custom story datasets for use with future models. It is highly descriptive and readily fills a futanari character. You can likely utilize it for female or male characters as well. Enjoy!
GGUF here: https://huggingface.co/jeiku/NarrativeNexus_7B_GGUF

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/Cookie_7B](https://huggingface.co/jeiku/Cookie_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/SpaghettiOs_7B](https://huggingface.co/jeiku/SpaghettiOs_7B)
* [jeiku/Rainbow_69_7B](https://huggingface.co/jeiku/Rainbow_69_7B)
* [jeiku/Paranoid_Android_7B](https://huggingface.co/jeiku/Paranoid_Android_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: jeiku/Cookie_7B
parameters:
normalize: true
models:
- model: jeiku/SpaghettiOs_7B
parameters:
weight: 1
- model: jeiku/Rainbow_69_7B
parameters:
weight: 1
- model: jeiku/Paranoid_Android_7B
parameters:
weight: 0.75
dtype: float16
``` |
NLUHOPOE/test-case-0 | NLUHOPOE | 2024-02-16T04:23:03Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T01:07:05Z | ---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Mistral-7B-v0.1. We fine-tuned this model for the data ordering task.
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sampled the Open-Orca dataset (we fine-tuned on 100,000 examples).
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1 | giraffe176 | 2024-03-04T16:22:03Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:cognitivecomputations/samantha-1.1-westlake-7b",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T03:02:27Z | ---
license: cc-by-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- NeverSleep/Noromaid-7B-0.4-DPO
- mistralai/Mistral-7B-v0.1
- Open-Orca/Mistral-7B-OpenOrca
- cognitivecomputations/samantha-1.1-westlake-7b
model-index:
- name: Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
name: Open LLM Leaderboard
---
# Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: cognitivecomputations/samantha-1.1-westlake-7b
parameters:
density: 0.59
weight: 0.15
- model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
density: 0.59
weight: 0.3
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.49
weight: 0.3
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.49
weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
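## Usage
A minimal inference sketch with 🤗 Transformers (the prompt and generation settings here are illustrative assumptions, not tuned recommendations):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giraffe176/Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain the DARE-TIES merge method.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```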
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Maid_Samantha_Hermes_Orca_dare_tiesv0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.80|
|AI2 Reasoning Challenge (25-Shot)|65.87|
|HellaSwag (10-Shot) |85.48|
|MMLU (5-Shot) |64.50|
|TruthfulQA (0-shot) |51.92|
|Winogrande (5-shot) |80.35|
|GSM8k (5-shot) |58.68|
|
LordNoah/latent_gpt2_medium_alpaca_e3 | LordNoah | 2024-02-19T07:49:25Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T07:41:42Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NLUHOPOE/test-case-1 | NLUHOPOE | 2024-02-23T02:01:13Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T00:30:16Z | ---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Mistral-7B-v0.1. We fine-tuned this model for the data ordering task.
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention (see the mask sketch below)
* Byte-fallback BPE tokenizer
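As an illustration of the sliding-window pattern, here is a minimal attention-mask sketch (toy sizes chosen for readability; Mistral-7B-v0.1's actual window is 4096 tokens):
```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    # Causal mask in which each token attends only to itself and
    # the previous `window - 1` tokens.
    idx = torch.arange(seq_len)
    rel = idx[:, None] - idx[None, :]   # distance from query to key
    return (rel >= 0) & (rel < window)  # True = attention allowed

print(sliding_window_mask(seq_len=8, window=4).int())
```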
# Dataset
We randomly sample from the SlimOrca dataset.
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
Kquant03/Kaltsit-16x7B-bf16 | Kquant03 | 2024-04-06T05:34:17Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T02:57:50Z | ---
license: apache-2.0
language:
- en
thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/0IVdEMftRMGDzJ7ZNYfXq.png"
tags:
- moe
- merge
---

# "My wish? It is to protect your and Amiya's wishes."
[Join our Discord!](https://discord.gg/uT4CzytfYW)
Dr. Kal'stit helped me with some dependency issues while I was trying to merge Gemma...when Gemma didn't merge I decided to make this. I hope it scores high across evals...here's the rough config:
- [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) - base
- [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) - expert #1
- [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B) - expert #2
- [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2) - expert #3
- [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp) - expert #4
- [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) - expert #5
- [FelixChao/Capricorn-7B-DPO](https://huggingface.co/FelixChao/Capricorn-7B-DPO) - expert #6
- [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) - expert #7
- [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp) - expert #8
- [openagi-project/OpenAGI-7B-v0.1](https://huggingface.co/openagi-project/OpenAGI-7B-v0.1) - expert #9
- [FelixChao/Capricorn-7B-DPO](https://huggingface.co/FelixChao/Capricorn-7B-DPO) - expert #10
- [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) - expert #11
- [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B) - expert #12
- [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B) - expert #13
- [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) - expert #14
- [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) - expert #15
- [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B) - expert #16
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enables models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 16 in my “frankenMoE”), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
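To make the router concrete, here is a minimal top-2 gating sketch in PyTorch (shapes and names are assumptions for illustration, not this repo's actual implementation):
```python
import torch
import torch.nn.functional as F

def top2_route(hidden: torch.Tensor, gate_weight: torch.Tensor):
    # hidden: (tokens, dim); gate_weight: (dim, n_experts)
    logits = hidden @ gate_weight                    # router score per expert
    probs = F.softmax(logits, dim=-1)
    top_p, top_idx = probs.topk(2, dim=-1)           # pick the 2 best experts
    top_p = top_p / top_p.sum(dim=-1, keepdim=True)  # renormalize the pair
    return top_idx, top_p                            # chosen experts + weights

tokens = torch.randn(4, 4096)   # 4 tokens, hidden size 4096
gate = torch.randn(4096, 16)    # 16 experts, as in this merge
idx, w = top2_route(tokens, gate)
# Each token's output is w[t,0]*expert[idx[t,0]](x) + w[t,1]*expert[idx[t,1]](x)
```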

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
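A sketch of a Switch-style load-balancing auxiliary loss of the kind described above (the exact coefficient and formulation vary by implementation; this is an assumed form, not this repo's code):
```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, top1_idx: torch.Tensor, n_experts: int):
    # f: fraction of tokens hard-routed to each expert
    # P: mean router probability mass assigned to each expert
    probs = F.softmax(router_logits, dim=-1)            # (tokens, n_experts)
    f = F.one_hot(top1_idx, n_experts).float().mean(0)  # dispatch fractions
    P = probs.mean(0)                                   # mean gate probability
    return n_experts * torch.sum(f * P)                 # minimized when uniform
```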
## "Wait...but you called this a frankenMoE?"
The difference between a MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained at the same time as the experts; the experts are existing fine-tuned models combined after the fact. |
Locutusque/hyperion-medium-preview | Locutusque | 2024-03-02T07:18:37Z | 729 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T04:31:16Z | ---
language:
- en
license: apache-2.0
library_name: transformers
model-index:
- name: hyperion-medium-preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/hyperion-medium-preview
name: Open LLM Leaderboard
---
# Model Card for M4-ai/hyperion-medium-preview

## Model Details
**Model Name**: M4-ai/hyperion-medium-preview
**Base Model**: mistralai/Mistral-7B-v0.1
**Publisher**: M4-ai
**Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
**Language**: English (multi-domain).
**License**: Apache-2.0
## Model Description
`M4-ai/hyperion-medium-preview` is a state-of-the-art language model fine-tuned on the Hyperion dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `M4-ai/hyperion-medium-preview` model was fine-tuned on the Hyperion dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Evaluation Results
Coming soon...
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "M4-ai/hyperion-medium-preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response (do_sample=True is needed for temperature/top_p/top_k to take effect)
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, do_sample=True, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
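The prompt above writes the ChatML-style tags by hand; if the tokenizer ships a matching chat template, the same prompt can be built programmatically (a sketch, assuming the template matches the format shown above):
```python
messages = [{"role": "user", "content": "What are the implications of Einstein's theory of relativity in modern physics?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
```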
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
## Licensing Information
This model is released under the Apache-2.0 license.
## Citation Information
If you use M4-ai/hyperion-medium-preview in your research, please cite the Hyperion dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hyperion-1: Illuminating the Path to Advanced Reasoning with a High-Quality, Multidisciplinary Question Answering Dataset},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/hyperion-v1.0}
}
```
## Quants
- exl2 by bartowski: https://huggingface.co/bartowski/hyperion-medium-preview-exl2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__hyperion-medium-preview)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.67|
|AI2 Reasoning Challenge (25-Shot)|60.67|
|HellaSwag (10-Shot) |83.67|
|MMLU (5-Shot) |63.73|
|TruthfulQA (0-shot) |42.93|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |40.49|
|
liminerity/Blur-7b-slerp-v1.41 | liminerity | 2024-03-11T18:30:50Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bardsai/jaskier-7b-dpo-v5.6",
"liminerity/merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T21:57:52Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v5.6
- liminerity/merge
model-index:
- name: Blur-7b-slerp-v1.41
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.23
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-slerp-v1.41
name: Open LLM Leaderboard
---
# Blur-7b-slerp-v1.41
Blur-7b-slerp-v1.41 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [liminerity/merge](https://huggingface.co/liminerity/merge)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: bardsai/jaskier-7b-dpo-v5.6
layer_range: [0, 32]
- model: liminerity/merge
layer_range: [0, 32]
merge_method: slerp
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
#im pretty sure this will be bricked so
```
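For reference, the `slerp` in the configuration above is spherical linear interpolation between the two models' weight tensors, with the `t` schedule controlling the blend per layer group. A minimal sketch of the operation on flattened tensors (an illustration of the math, not mergekit's actual code):
```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Interpolate along the great circle between the two weight vectors.
    dot = np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < eps:                       # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```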
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-slerp-v1.41)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.98|
|AI2 Reasoning Challenge (25-Shot)|72.78|
|HellaSwag (10-Shot) |88.65|
|MMLU (5-Shot) |64.84|
|TruthfulQA (0-shot) |74.23|
|Winogrande (5-shot) |83.90|
|GSM8k (5-shot) |71.49|
|
yam-peleg/Experiment27-7B | yam-peleg | 2024-02-27T21:30:37Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-27T19:43:24Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment27-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details will be shared in future experiments.
|
DreadPoor/WestMonarchLasers-7B-slerp | DreadPoor | 2024-03-07T12:56:47Z | 729 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"abideen/AlphaMonarch-laser",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:abideen/AlphaMonarch-laser",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-27T20:48:59Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- abideen/AlphaMonarch-laser
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- abideen/AlphaMonarch-laser
model-index:
- name: WestMonarchLasers-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.4
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/WestMonarchLasers-7B-slerp
name: Open LLM Leaderboard
---
# WestMonarchLasers-7B-slerp
WestMonarchLasers-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [abideen/AlphaMonarch-laser](https://huggingface.co/abideen/AlphaMonarch-laser)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
layer_range: [0, 32]
- model: abideen/AlphaMonarch-laser
layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/WestMonarchLasers-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__WestMonarchLasers-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.23|
|AI2 Reasoning Challenge (25-Shot)|72.44|
|HellaSwag (10-Shot) |88.66|
|MMLU (5-Shot) |64.73|
|TruthfulQA (0-shot) |72.40|
|Winogrande (5-shot) |85.56|
|GSM8k (5-shot) |67.63|
|