modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
ewqr2130/llama_sft_longer | ewqr2130 | 2024-01-31T18:11:42Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T18:05:26Z | ---
license: apache-2.0
---
No model card provided. |
sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO | sonthenguyen | 2024-02-04T02:04:30Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T15:50:58Z | ---
license: apache-2.0
---
Training hyperparameters
LoRA:
- r=16
- lora_alpha=16
- lora_dropout=0.05
- bias="none"
- task_type="CAUSAL_LM"
- target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
Training arguments:
- auto_find_batch_size=True
- gradient_checkpointing=True
- learning_rate=5e-7
- lr_scheduler_type="cosine"
- max_steps=3922
- optim="paged_adamw_32bit"
- warmup_steps=100
DPOTrainer:
- beta=0.1
- max_prompt_length=1024
- max_length=1536
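These settings wire together roughly as in the sketch below, using `peft` and `trl`; the base checkpoint, dataset path, and exact keyword names are assumptions (trl has renamed several `DPOTrainer` arguments across releases), so treat it as illustrative rather than the author's actual training script.
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed starting checkpoint, inferred from the model name
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA settings listed on this card
peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

# Training arguments listed on this card
training_args = TrainingArguments(
    output_dir="out",  # placeholder
    auto_find_batch_size=True,
    gradient_checkpointing=True,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    max_steps=3922,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# Placeholder preference dataset with prompt/chosen/rejected columns
train_dataset = load_dataset("json", data_files="dpo_pairs.json", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT adapter, trl derives the frozen reference model itself
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```
|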
Danielbrdz/Barcenas-Orca-2-7b | Danielbrdz | 2024-02-02T20:39:41Z | 739 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"es",
"dataset:HuggingFaceH4/no_robots",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T18:10:13Z | ---
license: other
license_name: microsoft-research-license
license_link: LICENSE
datasets:
- HuggingFaceH4/no_robots
language:
- en
- es
---
Model based on Microsoft Orca 2 7b.
Trained with the HuggingFaceH4/no_robots dataset for better natural conversation.
Symbolic model for me at least.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
s3nh/Severusectum-7B-DPO | s3nh | 2024-03-04T12:39:33Z | 739 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:FelixChao/Sectumsempra-7B-DPO",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T22:51:05Z | ---
language:
- en
license: mit
library_name: transformers
tags:
- mergekit
- merge
base_model:
- FelixChao/Sectumsempra-7B-DPO
- FelixChao/WestSeverus-7B-DPO-v2
pipeline_tag: text-generation
model-index:
- name: Severusectum-7B-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=s3nh/Severusectum-7B-DPO
name: Open LLM Leaderboard
---

# Severusectum-7B-DPO
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
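For intuition, the sketch below shows what spherical linear interpolation does to a single pair of weight tensors; it is not mergekit's implementation, which additionally applies the per-layer `t` schedule given by the `filter`/`value` lists in the configuration further down.
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight directions
    omega = torch.arccos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega < eps:
        # (Near-)parallel tensors: fall back to plain linear interpolation
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# Applied tensor-by-tensor, e.g. merged_w = slerp(0.5, w_from_model_a, w_from_model_b)
```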
### Models Merged
The following models were included in the merge:
* [FelixChao/Sectumsempra-7B-DPO](https://huggingface.co/FelixChao/Sectumsempra-7B-DPO)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: FelixChao/Sectumsempra-7B-DPO
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 32]
        model: FelixChao/Sectumsempra-7B-DPO
      - layer_range: [0, 32]
        model: FelixChao/WestSeverus-7B-DPO-v2
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_s3nh__Severusectum-7B-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.18|
|AI2 Reasoning Challenge (25-Shot)|71.50|
|HellaSwag (10-Shot) |88.55|
|MMLU (5-Shot) |64.79|
|TruthfulQA (0-shot) |72.45|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |70.51|
|
Xenon1/Xenon-3 | Xenon1 | 2024-02-04T04:56:38Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Xenon-3",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:52:26Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Xenon-3
pipeline_tag: text-generation
---
# Model Card for Xenon-3
Mistral-7B-v0.1 model fine-tuned on the UltraFeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Xenon-3")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Xenon-3")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
BioMistral/BioMistral-7B-TIES | BioMistral | 2024-02-19T15:37:10Z | 739 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"ties",
"medical",
"biology",
"conversational",
"fr",
"en",
"pl",
"es",
"it",
"ro",
"de",
"nl",
"dataset:pubmed",
"arxiv:2306.01708",
"arxiv:2402.10373",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T22:05:17Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- BioMistral/BioMistral-7B
library_name: transformers
tags:
- mergekit
- merge
- ties
- medical
- biology
license: apache-2.0
datasets:
- pubmed
language:
- fr
- en
- pl
- es
- it
- ro
- de
- nl
pipeline_tag: text-generation
---
# BioMistral-7B-mistral7instruct-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base.
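For intuition, the sketch below shows the trim / elect-sign / disjoint-merge steps that give TIES its name; it is a simplification rather than mergekit's implementation, which also handles the `weight` and `normalize` options from the configuration further down.
```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list, density: float = 0.5) -> torch.Tensor:
    """Simplified TIES merge of several fine-tuned tensors back onto a shared base tensor."""
    deltas = []
    for ft in finetuned:
        delta = (ft - base).flatten()
        # Trim: keep only the top-`density` fraction of entries by magnitude
        k = max(1, int(density * delta.numel()))
        mask = torch.zeros_like(delta, dtype=torch.bool)
        mask[delta.abs().topk(k).indices] = True
        deltas.append(delta * mask)
    stacked = torch.stack(deltas)
    # Elect one sign per parameter from the summed deltas
    elected = torch.sign(stacked.sum(dim=0))
    # Disjoint mean: average only the deltas that agree with the elected sign
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return (base.flatten() + merged).reshape(base.shape)
```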
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.1
  - model: BioMistral/BioMistral-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  normalize: true
dtype: bfloat16
```
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
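Assuming a recent `transformers` with the `autoawq` package installed, the AWQ checkpoints above can be loaded like any other causal LM; this is a minimal sketch, not an official snippet from the authors.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires `pip install autoawq`; transformers dispatches to the AWQ kernels automatically.
model_id = "BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```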
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
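Note that `AutoModel` returns the bare transformer without a language-modeling head; for actual text generation, a pattern along the lines of the sketch below is more common (the prompt and the `[INST]` format are illustrative assumptions carried over from the Mistral-Instruct base).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B", device_map="auto")

# Illustrative prompt only; see the advisory notice above before any medical use.
prompt = "[INST] Summarize the main risk factors for type 2 diabetes. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```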
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
|
abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft | abhinand | 2024-03-04T12:55:57Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:abhinand/ultrachat_200k_sharegpt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-06T19:00:10Z | ---
language:
- en
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- abhinand/ultrachat_200k_sharegpt
model-index:
- name: TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 33.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 36.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 5.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
name: Open LLM Leaderboard
---
# TinyLLaMA OpenHermes2.5 [Work in Progress]
This is a finetune of the TinyLLaMA base model, trained on [OpenHermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) and [UltraChat 200k](https://huggingface.co/datasets/abhinand/ultrachat_200k_sharegpt) for a single epoch.
Training was generously supported by [Jarvislabs.ai](https://jarvislabs.ai/).
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
is_llama_derived_model: true
# huggingface repo
datasets:
  - path: teknium/OpenHermes-2.5
    type: sharegpt
    conversation: chatml
    train_on_split: train
  - path: abhinand/ultrachat_200k_sharegpt
    type: sharegpt
    conversation: chatml
    train_on_split: train
load_in_4bit: false
load_in_8bit: false
bf16: true # require >=ampere
chat_template: chatml
dataset_prepared_path: last_run_prepared_path
hub_model_id: abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v1.0
group_by_length: false
val_set_size: 0.0
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
lora_modules_to_save:
- embed_tokens
- lm_head
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
output_dir: /home/tiny-llama/trained_models
gradient_accumulation_steps: 2
micro_batch_size: 32
eval_batch_size: 32
num_epochs: 1
logging_steps: 1
save_steps: 50
save_total_limit: 3
save_safetensors: true
gradient_checkpointing: true
lr_scheduler: cosine
optimizer: "adamw_bnb_8bit"
adam_beta2: 0.95
adam_epsilon: 0.00001
weight_decay: 0.1
learning_rate: 0.0005
max_grad_norm: 1.0
warmup_ratio: 0.05
# warmup_steps: 100
flash_attention: true
# Resume from a specific checkpoint dir
resume_from_checkpoint:
# If resume_from_checkpoint isn't set and you simply want it to start where it left off.
# Be careful with this being turned on between different models.
# auto_resume_from_checkpoints: true
# wandb configuration if you're using it
# Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
wandb_project: "tiny-llama-sft"
wandb_name:
wandb_run_id:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens: # these are delimiters
  - "<|im_start|>"
  - "<|im_end|>"
```
</details>
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 476
- num_epochs: 1
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft)
| Metric |Value|
|---------------------------------|----:|
|Avg. |36.59|
|AI2 Reasoning Challenge (25-Shot)|33.79|
|HellaSwag (10-Shot) |58.72|
|MMLU (5-Shot) |24.52|
|TruthfulQA (0-shot) |36.22|
|Winogrande (5-shot) |60.93|
|GSM8k (5-shot) | 5.38|
|
ChuckMcSneed/Gembo-v1.1-70b | ChuckMcSneed | 2024-03-04T13:43:45Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"nsfw",
"not-for-all-audiences",
"en",
"ru",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T02:07:32Z | ---
language:
- en
- ru
license: llama2
tags:
- merge
- mergekit
- nsfw
- not-for-all-audiences
model-index:
- name: Gembo-v1.1-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1.1-70b
name: Open LLM Leaderboard
---

This is like [Gembo v1](https://huggingface.co/ChuckMcSneed/Gembo-v1-70b), but with 6-7% more human data. It performs a bit worse on the benches (who cares? I do.), but should be able to write in more diverse styles (see [waxwing-styles.txt](waxwing-styles.txt); I tested it with v1, and v1 does it better). Mainly made for RP, but should be okay as an assistant. Turned out quite good, considering the amount of LoRAs I merged into it.
# Observations
- GPTisms and repetition: put temperature and rep. pen. higher, make GPTisms stop sequences
- A bit different from the usual stuff; I'd say it has so much slop in it that it unslops itself
- Lightly censored
- Fairly neutral, can be violent if you ask it really good, Goliath is a bit better at it
- Has a bit of optimism baked in, but it's not very severe, maybe a tiny bit more than in v1?
- Don't put too many style tags, here less is better
- Unlike v1, 1.1 knows a bit better when to stop
- Needs more wrangling than v1, but once you get it going it's good
- Sometimes can't handle '
- Moderately intelligent
- Quite creative
# Worth over v1?
Nah. I prefer hyperslop over this "humanized" one. Maybe I've been poisoned by slop.
# Naming
Internal name of this model was euryale-guano-saiga-med-janboros-kim-wing-lima-wiz-tony-d30-s40, but I decided to keep it short, and since it was iteration G in my files, I called it "Gembo".
# Prompt format
Alpaca. You can also try some other formats, I'm pretty sure it has a lot of them from all those merges.
```
### Instruction:
{instruction}
### Response:
```
# Settings
As I already mentioned, high temperature and rep. pen. work great.
For RP try something like this:
- temperature=5
- MinP=0.10
- rep.pen.=1.15
Adjust to match your needs.
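As a rough illustration only (not the author's setup), these settings map onto a plain `transformers` generate call roughly as follows; `min_p` needs a recent transformers release, and the Alpaca-style prompt follows the format shown above.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChuckMcSneed/Gembo-v1.1-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=5.0,          # the card's suggested high temperature
    min_p=0.10,               # MinP; supported in recent transformers versions
    repetition_penalty=1.15,  # "rep. pen." in the card
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```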
# How it was created
I took Sao10K/Euryale-1.3-L2-70B (Good base model) and added
- Mikael110/llama-2-70b-guanaco-qlora (Creativity+assistant)
- IlyaGusev/saiga2_70b_lora (Creativity+assistant)
- s1ghhh/medllama-2-70b-qlora-1.1 (More data)
- v2ray/Airoboros-2.1-Jannie-70B-QLoRA (Creativity+assistant)
- Chat-Error/fiction.live-Kimiko-V2-70B (Creativity)
- alac/Waxwing-Storytelling-70B-LoRA (New, creativity)
- Doctor-Shotgun/limarpv3-llama2-70b-qlora (Creativity)
- v2ray/LLaMA-2-Wizard-70B-QLoRA (Creativity+assistant)
- v2ray/TonyGPT-70B-QLoRA (Special spice)
Then I SLERP-merged it with cognitivecomputations/dolphin-2.2-70b (needed to bridge the gap between this wonderful mess and Smaxxxer, otherwise its quality is low) with 0.3t and then SLERP-merged it again with ChuckMcSneed/SMaxxxer-v1-70b (Creativity) with 0.4t. For SLERP merges I used https://github.com/arcee-ai/mergekit.
# Benchmarks (Do they even mean anything anymore?)
### NeoEvalPlusN_benchmark
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark)
| Test name | Gembo | Gembo 1.1 |
| ---------- | ---------- | ---------- |
| B | 2.5 | 2.5 |
| C | 1.5 | 1.5 |
| D | 3 | 3 |
| S | 7.5 | 6.75 |
| P | 5.25 | 5.25 |
| Total | 19.75 | 19 |
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|--------------|-------|-----|---------|-----|----------|----------|-----|
|Gembo-v1-70b |70.51 |71.25|86.98 |70.85|63.25 |80.51 |50.19|
|Gembo-v1.1-70b|70.35 |70.99|86.9 |70.63|62.45 |80.51 |50.64|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__Gembo-v1.1-70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.35|
|AI2 Reasoning Challenge (25-Shot)|70.99|
|HellaSwag (10-Shot) |86.90|
|MMLU (5-Shot) |70.63|
|TruthfulQA (0-shot) |62.45|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |50.64|
|
vicgalle/Mixtral-7Bx2-truthy | vicgalle | 2024-03-04T12:16:59Z | 739 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T18:16:53Z | ---
license: apache-2.0
library_name: transformers
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: Mixtral-7Bx2-truthy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.2
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.68
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/Mixtral-7Bx2-truthy
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
```
"results": {
"truthfulqa_mc": {
"mc1": 0.6107711138310894,
"mc1_stderr": 0.017068552680690338,
"mc2": 0.7527999957012117,
"mc2_stderr": 0.014045181780156504
}
}
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__Mixtral-7Bx2-truthy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.64|
|AI2 Reasoning Challenge (25-Shot)|72.18|
|HellaSwag (10-Shot) |87.88|
|MMLU (5-Shot) |65.20|
|TruthfulQA (0-shot) |74.68|
|Winogrande (5-shot) |80.66|
|GSM8k (5-shot) |67.25|
|
yam-peleg/Experiment4-7B | yam-peleg | 2024-02-11T11:30:38Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-11T10:29:57Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liminerity/Omningotex-7b-slerp | liminerity | 2024-03-11T16:12:46Z | 739 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/binarized-ingotrix-slerp-7b",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:liminerity/binarized-ingotrix-slerp-7b",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T10:34:11Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/binarized-ingotrix-slerp-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
base_model:
- liminerity/binarized-ingotrix-slerp-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
model-index:
- name: Omningotex-7b-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.32
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Omningotex-7b-slerp
name: Open LLM Leaderboard
---
# Introducing Omningotex-7b: The World's Most Accurate 7B LLM
Today, I'm excited to share the creation of a groundbreaking language model, "liminerity/Omningotex-7b-slerp." This model has achieved an impressive accuracy rate of 76.33%, making it the most accurate 7B LLM in the world.
The journey to create Omningotex-7b-slerp began with an experimental process called "merging." I started with a model named "ingot-7b-slerp," which was created by merging two other LLMs, "blurred-beagle-7b-slerp" (by myself, liminerity) and "Macaroni-7b-Tied" (by andrijdavid), a total of eight times over.
After the successful creation of ingot-7b-slerp, I proceeded to merge it with another model, "dpo-binarized-NeuralTrix-7B" by eren23, using gradient slerp. The resulting model, "binarized-ingotrix-slerp-7b," achieved an accuracy rate of 76.04%.
To further enhance the model's performance, I decided to merge "binarized-ingotrix-slerp-7b" with "dpo-binarized-NeutrixOmnibe-7B" by eren23 once again. The resulting model, "Omningotex-7b," is now the most accurate 7B LLM available.
This breakthrough in LLM accuracy was achieved through a combination of careful experimentation and a deep understanding of the underlying algorithms and techniques. I believe that Omningotex-7b-slerp's success demonstrates the potential for further advancements in the field of natural language processing and artificial intelligence.
I look forward to sharing more updates and insights as I continue to explore the possibilities of LLMs and push the boundaries of what is possible in the world of AI. Stay tuned for more exciting developments in the future!
A huge thank you to Maxime Labonne and his LazyMergekit Colab project. Using it helped me gain a firmer grasp of the concepts at play and led to the creation of this model. I'm sure it won't be number 1 for long, which excites me even more!
Next, I set out to learn how to fine-tune with the resources I have available.
My next overall goal is to find a way to produce a smaller model with high accuracy, either by merging down to fewer layers after each merge, adding finetuning between merges, or merging larger, more accurate models into a smaller base while maintaining accuracy and performance. Every version of "TinyMistral" I come across seems to be bricked, in the sense that it spits out nonsense. Thank you for your time if you read this all the way.
# Omningotex-7B-slerp
Omningotex-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/binarized-ingotrix-slerp-7b](https://huggingface.co/liminerity/binarized-ingotrix-slerp-7b)
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: liminerity/binarized-ingotrix-slerp-7b
        layer_range: [0, 32]
      - model: eren23/dpo-binarized-NeutrixOmnibe-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/binarized-ingotrix-slerp-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Omningotex-7b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Omningotex-7b-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.33|
|AI2 Reasoning Challenge (25-Shot)|73.29|
|HellaSwag (10-Shot) |88.96|
|MMLU (5-Shot) |64.69|
|TruthfulQA (0-shot) |76.32|
|Winogrande (5-shot) |84.21|
|GSM8k (5-shot) |70.51|
|
ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDRejected-SFTChosen-Zephyr-7b-beta | ArianAskari | 2024-02-13T06:21:46Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ArianAskari/SOLID",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T00:16:03Z | ---
library_name: transformers
tags: []
license: apache-2.0
language:
- en
datasets: ArianAskari/SOLID
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mzio/hedgehog-mistral_7b-alpaca_clean-smd_lora_1e_3 | mzio | 2024-02-14T09:03:12Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:yahma/alpaca-cleaned",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T08:49:46Z | ---
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
Configs
```
name: llama
model:
pretrained_model_name_or_path: 'mistralai/Mistral-7B-v0.1'
cache_dir: '/juice/scr/scr110/scr/nlp/data/neo/hub/'
return_dict: true
quantization: false
device_map: auto # null
low_cpu_mem_usage: true # false
torch_dtype: bfloat16
attn_implementation: eager # so we can load attention weights
rope_theta: 10000.0
attention:
attention_type: hedgehog_llama
feature_map: softmax_dim
feature_map_kwargs:
input_dim: 128
eps: 1e-12
# mlp: null # to set
fullspace: true
layer_idx: null # to set
learned_kernel: untied_head
learned_kernel_kwargs:
feature_dim: 128
skip_connection: false
bias: false
zero_init: false
tie_qk_kernels: false
train_qk: true
peft:
method: lora
kwargs:
r: 8 # 256
lora_alpha: 16 # 512
lora_dropout: 0.1 # 0.05
target_modules: ['self_attn.q_proj', 'self_attn.k_proj']
dataset:
name: alpaca_clean
dataset_config:
name: alpaca
path: yahma/alpaca-cleaned
chunk_size: 1024 # 2048
concat_data: true
cache_dir: '/u/scr/nlp/data/alpaca'
pretrained_model_config:
pretrained_model_name_or_path: 'mistralai/Mistral-7B-v0.1'
cache_dir: '/juice/scr/scr110/scr/nlp/data/neo/hub/'
preprocess_config: null
dataloader:
batch_size: 1
num_workers: 2
drop_last: false
pin_memory: true
optimizer:
optim: adamw_torch_fused
lr: 0.001
weight_decay: 0.0
lr_scheduler:
lr_scheduler_type: reduce_lr_on_plateau
mode: min
factor: 0.1
patience: 10
min_lr: 0.00001
trainer: # HuggingFace Trainer-like arguments
name: distill_attention
token_reduce: true
bottom_attention_only: false
reverse_kl: false
bf16: true
train_split: train
val_split: validation
num_train_epochs: 2
gradient_accumulation_steps: 8
seed: 42
batch_size: 1
load_best_model_at_end: true
greater_is_better: false
metric_for_best_model: distill/eval/loss
logging_steps: 100
evaluation_strategy: steps
max_steps: -1
eval_steps: 100
max_eval_batches: null
dataset:
name: alpaca_clean
dataset_config:
name: alpaca
path: yahma/alpaca-cleaned
chunk_size: 1024 # 2048
concat_data: true
cache_dir: '/u/scr/nlp/data/alpaca'
pretrained_model_config:
pretrained_model_name_or_path: 'mistralai/Mistral-7B-v0.1'
cache_dir: '/juice/scr/scr110/scr/nlp/data/neo/hub/'
preprocess_config: null
dataloader:
batch_size: 1
num_workers: 2
drop_last: false
pin_memory: true
optimizer:
optim: adamw_torch_fused
lr: 1e-4
weight_decay: 0.0
lr_scheduler:
lr_scheduler_type: reduce_lr_on_plateau
mode: min
factor: 0.1
patience: 10
min_lr: 0.00001
trainer: # HuggingFace Trainer-like arguments
name: default
bf16: true
train_split: train
val_split: validation
num_train_epochs: 2
gradient_accumulation_steps: 8
seed: 42
batch_size: 1
load_best_model_at_end: true
greater_is_better: false
metric_for_best_model: eval/loss # eval/rouge/geometric_mean
logging_steps: 100
evaluation_strategy: steps
max_steps: -1
eval_steps: 100
max_eval_batches: null
finetune:
method: lora
kwargs:
r: 8
lora_alpha: 16 # 32
lora_dropout: 0 # 0.05
target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
```
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davzoku/frankencria-llama2-11b-v1.3-m.1 | davzoku | 2024-02-14T13:29:27Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama-2",
"en",
"dataset:mlabonne/CodeLlama-2-20k",
"arxiv:2312.15166",
"base_model:davzoku/cria-llama2-7b-v1.3",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T12:32:00Z | ---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
base_model:
- davzoku/cria-llama2-7b-v1.3
library_name: transformers
tags:
- mergekit
- merge
- llama-2
---
# FrankenCRIA v1.3-m.1
## What is FrankenCRIA?
<p align="center">
<img src="https://github.com/davzoku/cria/blob/main/assets/frankencria-icon-512x512.png?raw=true" width="300" height="300" alt="FrankenCRIA Logo"> <br>
<i>This is a frankenmerge of <a href="https://huggingface.co/davzoku/cria-llama2-7b-v1.3">davzoku/cria-llama2-7b-v1.3</a>.</i>
</p>
The configuration is the same as [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1) and [mlabonne/FrankenBeagle14-11B](https://huggingface.co/mlabonne/FrankenBeagle14-11B), and follows the DUS technique used in [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0).
Please be aware that this model is highly experimental, and no further training has been conducted following the merge.
Therefore, the model performance may not meet expectations, as described in the [SOLAR paper](https://arxiv.org/abs/2312.15166).
## 📦 FrankenCRIA Model Release
FrankenCRIA v1.3 comes with several variants.
- [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1)
- [davzoku/frankencria-llama2-12.5b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-12.5b-v1.3-m.2): 12.5B interleaving FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS)
## 🧩 Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3)
### Configuration
The following YAML configuration was used to produce this model.
```yaml
# https://huggingface.co/Undi95/Mistral-11B-v0.1
slices:
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [0, 24]
- sources:
- model: davzoku/cria-llama2-7b-v1.3
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
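## 💻 Usage

A minimal usage sketch for loading the merged model with 🤗 Transformers (this section is not from the original card; the sampling settings are illustrative assumptions, not tuned recommendations):

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "davzoku/frankencria-llama2-11b-v1.3-m.1"
prompt = "What is a large language model?"

# Build a text-generation pipeline around the 11B frankenmerge.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion; adjust the sampling settings for your use case.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```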
|
fzzhang/Marcoroni-neural-chat-7B-v2_gsm8k_merged_s | fzzhang | 2024-02-16T20:54:51Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:gsm8k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T13:15:28Z | ---
library_name: transformers
license: apache-2.0
datasets:
- gsm8k
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kquant03/NeuralTrix-7B-dpo-relaser | Kquant03 | 2024-02-24T16:31:02Z | 739 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/OmniBeagle-7B",
"flemmingmiguel/MBX-7B-v3",
"AiMavenAi/AiMaven-Prometheus",
"base_model:mlabonne/OmniBeagle-7B",
"base_model:flemmingmiguel/MBX-7B-v3",
"base_model:AiMavenAi/AiMaven-Prometheus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T07:55:29Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/OmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
- AiMavenAi/AiMaven-Prometheus
base_model:
- mlabonne/OmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
- AiMavenAi/AiMaven-Prometheus
license: apache-2.0
---
# NeuralTrix-7B-v1
NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus)
It was then trained with DPO using:
* https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: mlabonne/OmniBeagle-7B
parameters:
density: 0.65
weight: 0.4
- model: flemmingmiguel/MBX-7B-v3
parameters:
density: 0.6
weight: 0.35
- model: AiMavenAi/AiMaven-Prometheus
parameters:
density: 0.6
weight: 0.35
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/NeuralTrix-7B-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Azure99/blossom-v4-qwen1_5-4b | Azure99 | 2024-02-20T02:42:07Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"zh",
"en",
"dataset:Azure99/blossom-chat-v2",
"dataset:Azure99/blossom-math-v3",
"dataset:Azure99/blossom-wizard-v2",
"dataset:Azure99/blossom-orca-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T13:21:02Z | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v2
- Azure99/blossom-math-v3
- Azure99/blossom-wizard-v2
- Azure99/blossom-orca-v2
language:
- zh
- en
pipeline_tag: text-generation
---
# **BLOSSOM-v4-qwen1_5-4b**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a conversational language model obtained by instruction fine-tuning the Qwen1.5-4B pretrained model on a mixture of the Blossom Orca/Wizard/Chat/Math datasets. Blossom offers strong general-purpose capability and context understanding, and the high-quality Chinese and English datasets used for training have also been open-sourced.
Training proceeds in two stages: the first stage uses the 100K Wizard, 100K Orca, and 20K Math single-turn instruction datasets for 1 epoch; the second stage uses the 50K Blossom chat multi-turn dialogue dataset plus a random 2% sample of the first-stage data for 3 epochs.
### Inference
Inference takes the form of dialogue continuation.
Single-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: 你好
|Bot|:
```
Multi-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: 你好
|Bot|: 你好,有什么我能帮助你的?<|endoftext|>
|Human|: 介绍下中国的首都吧
|Bot|:
```
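A minimal generation sketch using the single-turn dialogue-continuation format above (this code is not from the original card; the sampling parameters are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azure99/blossom-v4-qwen1_5-4b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Single-turn prompt in the dialogue-continuation format described above.
prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: Hello\n"
    "|Bot|: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```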
Note: at the end of each Bot turn in the dialogue history, append an <|endoftext|> token. |
yam-peleg/Experiment20-7B | yam-peleg | 2024-02-20T03:22:38Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T02:33:20Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment20-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details in future experiments.
---
license: apache-2.0
--- |
CultriX/MonaTrix-v4 | CultriX | 2024-02-20T23:13:20Z | 739 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralMaxime-7B-slerp",
"eren23/ogno-monarch-jaskier-merge-7b",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T11:45:19Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- eren23/ogno-monarch-jaskier-merge-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
base_model:
- Kukedlc/NeuralMaxime-7B-slerp
- eren23/ogno-monarch-jaskier-merge-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
license: apache-2.0
---
# MonaTrix-v4
MonaTrix-v4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: Kukedlc/NeuralMaxime-7B-slerp
#Emphasize the beginning of Vicuna format models
parameters:
weight: 0.36
density: 0.65
- model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
weight: 0.34
density: 0.6
# Vicuna format
- model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
weight: 0.3
density: 0.6
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MonaTrix-v4"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
aloobun/Reyna-CoT-4B-v0.1 | aloobun | 2024-02-23T07:35:19Z | 739 | 5 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"finetune",
"synthetic data",
"custom_code",
"COT",
"conversational",
"dataset:kaist-ai/CoT-Collection",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T03:02:50Z | ---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE
library_name: transformers
license: other
tags:
- finetune
- synthetic data
- custom_code
- qwen2
- COT
datasets:
- kaist-ai/CoT-Collection
---

- Finetuned [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on a variety of CoT tasks including reasoning, closed-book question answering, ethics, and more.
- Datasets: curated from [kaist-ai/CoT-Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection), [euclaise/TinyCoT](https://huggingface.co/datasets/euclaise/TinyCoT), and a very small subset from [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
- This marks the fourth model in this series. This experiment aims to improve Chain of Thought (CoT) capabilities on smaller language models.
- I may rerun the finetuning experiment (with a more balanced dataset), using an iterative rationale-bootstrapping procedure inspired by euclaise/Memphis-CoT-3B.
- Hyperparameters: AdamW with eps of 1e-8, cosine decay with 20% warmup, lr=2e-5
## Benchmarks:
WIP
## Example:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria
import torch
class MyStoppingCriteria(StoppingCriteria):
def __init__(self, target_sequence, prompt):
self.target_sequence = target_sequence
self.prompt=prompt
def __call__(self, input_ids, scores, **kwargs):
generated_text = tokenizer.decode(input_ids[0])
generated_text = generated_text.replace(self.prompt,'')
if self.target_sequence in generated_text:
return True
return False
def __len__(self):
return 1
def __iter__(self):
yield self
modelpath="aloobun/Reyna-CoT-4B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.bfloat16,
device_map="cuda",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
modelpath,
trust_remote_code=True,
use_fast=False,
)
prompt = "Avery opens a flower shop. She ties 8 bunches of flowers with 9 flowers in each bunch. How many bunches would she have if she put 12 flowers in each bunch instead?\n"
encoded_input = tokenizer(prompt, return_tensors='pt')
input_ids=encoded_input['input_ids'].cuda()
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)
op = model.generate(
input_ids,
streamer=streamer,
pad_token_id=tokenizer.eos_token_id,
do_sample=True,
temperature=0.6,
top_p=0.8,
max_new_tokens=512,
stopping_criteria=MyStoppingCriteria("<|endoftext|>", prompt)
)
```
## Output:
>She would have 8 x 9 = 72 flowers in total.
>She would have 72 / 12 = 6 bunches of flowers with 12 flowers in each bunch.
>Therefore, the answer is 6.<|endoftext|> |
vishnukv/speechless-mistral-dolphin-orca-platypus-samantha-WestSeverusJaskier-7b | vishnukv | 2024-03-05T10:11:11Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:vishnukv/WestSeverusJaskier",
"base_model:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T10:03:48Z | ---
base_model:
- vishnukv/WestSeverusJaskier
- uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [vishnukv/WestSeverusJaskier](https://huggingface.co/vishnukv/WestSeverusJaskier)
* [uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b) |
Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B | Eric111 | 2024-03-06T21:56:59Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106",
"eren23/dpo-binarized-NeuralTrix-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T21:53:07Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106
- eren23/dpo-binarized-NeuralTrix-7B
---
# MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B
MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106](https://huggingface.co/Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106)
* [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106
layer_range: [0, 32]
- model: eren23/dpo-binarized-NeuralTrix-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
ChaoticNeutrals/Eris_Remix_DPO_7B | ChaoticNeutrals | 2024-03-07T02:14:35Z | 739 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T01:22:05Z | ---
base_model: []
library_name: transformers
license: other
language:
- en
---

# Jeitral: "Eris, the Greek goddess of chaos and discord."
Notes: The model should be excellent for both RP and chat-related tasks. It seems to work with both the Alpaca and ChatML prompt formats.
Collaborative effort from both @Jeiku and @Nitral involving what we currently felt were our best individual projects.
We hope you enjoy! - The Chaotic Neutrals.
# Remix with DPO: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
Trained for 200 steps/ 1 epoch
Base model used: https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B
|
Gille/StrangeMerges_33-7B-slerp | Gille | 2024-03-07T12:43:42Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ZySec-AI/ZySec-7B-v1",
"Gille/StrangeMerges_30-7B-slerp",
"conversational",
"base_model:ZySec-AI/ZySec-7B-v1",
"base_model:Gille/StrangeMerges_30-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T11:58:05Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ZySec-AI/ZySec-7B-v1
- Gille/StrangeMerges_30-7B-slerp
base_model:
- ZySec-AI/ZySec-7B-v1
- Gille/StrangeMerges_30-7B-slerp
---
# StrangeMerges_33-7B-slerp
StrangeMerges_33-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ZySec-AI/ZySec-7B-v1](https://huggingface.co/ZySec-AI/ZySec-7B-v1)
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ZySec-AI/ZySec-7B-v1
layer_range: [0, 32]
- model: Gille/StrangeMerges_30-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: ZySec-AI/ZySec-7B-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.3, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0.7, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_33-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Locutusque/ChatHercules-2.5-Mistral-7B-DPO | Locutusque | 2024-03-10T04:38:10Z | 739 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T00:43:13Z | ---
library_name: transformers
tags:
- merge
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
---
# ChatHercules-2.5-Mistral-7B-DPO

ChatHercules-2.5-Mistral-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/Hercules-2.5-Mistral-7B](https://huggingface.co/Locutusque/Hercules-2.5-Mistral-7B)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
I then use DPO to fine-tune the product of the merge on 20% of argilla/distilabel-intel-orca-dpo-pairs.
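For reference, a rough sketch of how such a DPO stage might be set up with TRL's `DPOTrainer` (this is not the author's actual training script; the model path, column mapping, and hyperparameters are assumptions):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

merged = "path/to/merged-ChatHercules-2.5-Mistral-7B"  # placeholder for the pre-DPO merge
tokenizer = AutoTokenizer.from_pretrained(merged)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(merged)

# Roughly 20% of the preference pairs, remapped to the prompt/chosen/rejected columns DPOTrainer expects.
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train[:20%]")
dataset = dataset.map(lambda row: {
    "prompt": row["input"],       # column names are assumptions about this dataset's schema
    "chosen": row["chosen"],
    "rejected": row["rejected"],
})

trainer = DPOTrainer(
    model,
    ref_model=None,               # TRL builds an implicit frozen reference copy
    beta=0.1,                     # assumed DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="chathercules-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        num_train_epochs=1,
    ),
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```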
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Locutusque/Hercules-2.5-Mistral-7B
layer_range: [0, 32]
- model: openchat/openchat-3.5-0106
layer_range: [0, 32]
merge_method: slerp
base_model: Locutusque/Hercules-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "hydra-project/ChatHercules-2.5-Mistral-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Evaluation results


|
automerger/Experiment28Yam-7B | automerger | 2024-03-10T17:57:31Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:mayacinka/yam-jom-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T13:18:49Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- mayacinka/yam-jom-7B-slerp
---
# Experiment28Yam-7B
Experiment28Yam-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [mayacinka/yam-jom-7B-slerp](https://huggingface.co/mayacinka/yam-jom-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment28-7B
# No parameters necessary for base model
- model: mayacinka/yam-jom-7B-slerp
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: yam-peleg/Experiment28-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment28Yam-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Abhaykoul/HelpingAI-Lite-4x1b | Abhaykoul | 2024-03-19T06:30:10Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"mergekit",
"vortexmergekit",
"OEvortex/HelpingAI-unvelite",
"OEvortex/lite-hermes",
"OEvortex/HelpingAI-Lite",
"OEvortex/HelpingAI-Lite-1.5T",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T14:47:47Z | ---
license: apache-2.0
inference: false
tags:
- moe
- mergekit
- vortexmergekit
- OEvortex/HelpingAI-unvelite
- OEvortex/lite-hermes
- OEvortex/HelpingAI-Lite
- OEvortex/HelpingAI-Lite-1.5T
---
# HelpingAI-Lite-4x1b
Hey there! 👋 Welcome to the HelpingAI-Lite-4x1b! This is a MoE (mixture-of-experts) model brought together using the awesome [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl).
Let's see what we've got in this model:
* [OEvortex/HelpingAI-unvelite](https://huggingface.co/OEvortex/HelpingAI-unvelite) 🚀
* [OEvortex/lite-hermes](https://huggingface.co/OEvortex/lite-hermes) 🚀
* [OEvortex/HelpingAI-Lite](https://huggingface.co/OEvortex/HelpingAI-Lite) 🚀
* [OEvortex/HelpingAI-Lite-1.5T](https://huggingface.co/OEvortex/HelpingAI-Lite-1.5T) 🚀
## 🧩 Configuration
```yaml
base_model: OEvortex/HelpingAI-Lite
dtype: float16
gate_mode: cheap_embed
experts:
- source_model: OEvortex/HelpingAI-unvelite
positive_prompts: ["You are an helpful unbounded, uncensored assistant."]
- source_model: OEvortex/lite-hermes
positive_prompts: ["You are helpful assistant."]
- source_model: OEvortex/HelpingAI-Lite
positive_prompts: ["You are a coding assistant."]
- source_model: OEvortex/HelpingAI-Lite-1.5T
    positive_prompts: ["You are helpful and general-purpose assistant."]
```
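## 💻 Usage

A minimal loading sketch (not part of the original card; it assumes the tokenizer ships a chat template inherited from the TinyLlama-style experts, and the sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Abhaykoul/HelpingAI-Lite-4x1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a chat prompt via the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "What is a mixture-of-experts model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```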
|
eren23/Experiment26-12B | eren23 | 2024-03-13T08:39:22Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment26-7B",
"en",
"base_model:yam-peleg/Experiment26-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T08:29:12Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
base_model:
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment26-7B
license: cc-by-nc-4.0
language:
- en
---
# Experiment26-12B
Experiment26-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [4, 12]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [8, 16]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [12, 20]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [16, 24]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [20, 28]
model: yam-peleg/Experiment26-7B
- sources:
- layer_range: [24, 32]
model: yam-peleg/Experiment26-7B
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eren23/Experiment26-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
yam-peleg/gemma-7b-experiment | yam-peleg | 2024-03-16T14:21:18Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-16T13:42:33Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**gemma-7b-experiment**
**This is just an experiment placeholder for testing out a local validation strategy; there is absolutely no real reason for you to try this model. It has nothing new in it.**
An experiment for testing and refining a local cross-validation strategy.
The goal is to evaluate LLMs locally and make sure the scores obtained locally can be reproduced publicly.
More details coming soon.
---
license: apache-2.0
--- |
Chickaboo/ChickaQ-V2-Large-Beta | Chickaboo | 2024-03-17T16:55:45Z | 739 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:vilm/Quyen-mini-v0.1",
"base_model:Qwen/Qwen1.5-1.8B-Chat",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-17T05:18:50Z | ---
base_model:
- vilm/Quyen-mini-v0.1
- Qwen/Qwen1.5-1.8B-Chat
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# Models in the ChickaQ family
- **ChickaQ (0.5B)**
- **ChickaQ-Large (1.8B)**
- **ChickaQ-V2-Beta (0.9B)**
- **ChickaQ-V2-Large-Beta (3B)**
# mergedmodel
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [vilm/Quyen-mini-v0.1](https://huggingface.co/vilm/Quyen-mini-v0.1)
* [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Qwen/Qwen1.5-1.8B-Chat
layer_range: [0, 24]
- sources:
- model: vilm/Quyen-mini-v0.1
layer_range: [0, 24]
merge_method: passthrough
dtype: bfloat16
```
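### Usage

A minimal usage sketch (not from the original card; it assumes the Qwen1.5-Chat chat template carried over from the stacked base models, and the sampling settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Chickaboo/ChickaQ-V2-Large-Beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style prompting via the tokenizer's chat template (assumed to be present).
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```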
### License
---
license: mit
license_link: [mit](https://mit-license.org/)
--- |
mvpmaster/NeuralDareDMistralPro-7b-slerp | mvpmaster | 2024-03-19T23:59:52Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralDaredevil-7B",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mlabonne/NeuralDaredevil-7B",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T23:52:49Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-7B
- NousResearch/Hermes-2-Pro-Mistral-7B
base_model:
- mlabonne/NeuralDaredevil-7B
- NousResearch/Hermes-2-Pro-Mistral-7B
---
# NeuralDareDMistralPro-slerp
NeuralDareDMistralPro-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralDaredevil-7B
layer_range: [0, 32]
- model: NousResearch/Hermes-2-Pro-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mvpmaster/NeuralDareDMistralPro-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ontocord/Felix-8B | ontocord | 2024-03-26T03:48:14Z | 739 | 25 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T10:54:30Z | ---
license: apache-2.0
language:
- en
---
March 23, 2024
# Felix-8B: A model built with lawfulness alignment
Felix-8B is an experimental language model developed by Ontocord.ai, specializing in addressing lawfulness concerns under the Biden-Harris Executive Order on AI and the principles of the EU AI Act. This model has achieved one of the highest scores on the TruthfulQA benchmark compared to models of its size, showcasing its exceptional performance in providing accurate and reliable responses.
Felix-8B is **experimental and a research work product** and a DPO reinforcement learning version of [ontocord/sft-4e-exp2](https://huggingface.co/ontocord/sft-4e-exp2) which in turn is a fine-tuned version of [TencentARC/Mistral_Pro_8B_v0.1](https://huggingface.co/TencentARC/Mistral_Pro_8B_v0.1).
**If you find any issues please let us know in the Community section so we can improve the model.**

## Model Description
Felix-8B is an 8 billion parameter language model trained using Ontocord.ai's proprietary auto-purpleteaming technique. The model has been fine-tuned and optimized using synthetic data, with the goal of improving its robustness and ability to handle a wide range of tasks while maintaining a strong focus on safety and truthfulness.
## Evaluation
| arc (25) | hellaswag (10) | mmlu (5) | truthfulqa_mc (20) | winogrande | gsm8k (exact) | avg (exact) |
|:--------:|:--------------:|:--------:|:------------------:|:----------:|:-------------:|:-----------:|
| 0.6519 | 0.8474 | 0.6057 | 0.638 | 0.764 | 0.5997 | 0.68445 |
Felix-8B has been extensively evaluated on various benchmarks, including the TruthfulQA benchmark, where it has achieved one of the highest scores among models of its size.
We also confirmed that the model increased its helpfulness during DPO, which is very encouraging as we did not specifically target helpfulness, but rather harmlessness.
More evaluations will be provided as we further continue our research.
## Redteaming Considerations
Our redteam focuses on specific areas of concern under the executive order, including:
* Harm to oneself or others.
* Requests on how to create cyber-attacks.
* Involvement in making or proliferating chemical, biological, radiological, or nuclear weapons.
* Participation in any illegal act.
* Infringement of privacy or rights.
* Attempts to circumvent redteam controls.
Our redteam focuses on the potential coverage of the EU AI Act, including:
* Misinformation
* Social Scoring
* Emotion Monitoring and Control in Schools and Workplaces
* Predictive Policing
* Classification of People Based on Sensitive Characteristics
We also address traditional concerns related to AI Safety:
* Toxic language
* Fairness
* Stereotyping and Discrimination
* Adversarial Prompting
* Out of Distribution Robustness
* Machine Ethics
* Avoidance of unqualified professional advice.
## Dataset
The SFT model [ontocord/sft-4e-exp2](https://huggingface.co/ontocord/sft-4e-exp2) was trained with a subset of the excellent [OpenHermes 2.5 dataset](https://huggingface.co/datasets/teknium/OpenHermes-2.5).
The DPO reinforcement learning uses an Ontocord synthetic auto-purpleteaming dataset. We are actively researching auto-purpleteaming for lawfulness alignment and will release a research report as open science when we have further evaluated our process. An example of a dataset entry is as follows:
```
<|im_start|>user
Explain it simply for someone who doesn't understand much about the topic. Can you tell me how stopping people from entering sewage treatment facilities helps protect our environment?
<|im_end|>
<|im_start|>assistant
Sure! Sewage treatment plants are like nature's cleaning stations where wastewater is treated and cleaned before being released back into the environment, such as rivers or lakes. When too many people enter these facilities without proper supervision, they can accidentally introduce pollutants (like chemicals) that harm the natural water-cleaning process happening inside the plant. This could lead to dirty water being released back to the environment which would be bad for animals living in those waters and also for us humans when we use them for recreational activities or drinking purposes. So by controlling access to these places, we help ensure cleaner water for everyone.
```
## Limitations and Biases
While Felix-8B has been designed with safety and truthfulness as its primary focus, it is essential to acknowledge that no language model is entirely free from biases or limitations. Users should be aware that the model's outputs may be influenced by the data it was trained on and the inherent biases present in the training data.
Additionally, Felix-8B's performance may vary depending on the specific task and domain. Users should exercise caution when applying the model to sensitive or high-stakes situations and always verify the model's outputs with human oversight.
Felix-8B is **experimental and a research work product**. We have found, for example, that the model may be overly verbose, which may be due to our SFT or to the TencentARC/Mistral_Pro_8B_v0.1 base model. We are also actively working on improving the training dataset; we have discovered, for example, that some answers in our dataset are too short, or may contain typos due to our use of a quantized generative model to create synthetic triples for DPO.
## Ethical Considerations
Ontocord.ai is committed to developing AI technologies that prioritize lawfulness and equal access. Felix-8B has been designed with these principles in mind, and efforts have been made to mitigate potential risks and biases during the training process.
However, users should be aware of the potential misuse of language models and ensure that Felix-8B is used in compliance with ethical guidelines and applicable laws and regulations.
## How to Use
To use Felix-8B, you can access the model through the Hugging Face Model Hub:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("ontocord/Felix-8B")
model = AutoModelForCausalLM.from_pretrained("ontocord/Felix-8B")
```
Instruction format:
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```
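Putting the two together, a minimal generation sketch that continues from the tokenizer and model loaded above (the sampling settings are illustrative assumptions, not recommendations from the authors):

```python
# Build a prompt in the instruction format shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, who are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
# Decode only the assistant's reply.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```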
## Acknowledgement
We want to thank [Teknium](https://huggingface.co/teknium) and [Nous Research](https://huggingface.co/NousResearch) for the excellent work in performant models. We would also like to thank our friends from [DFKI](https://www.dfki.de/en/web/research) and our friends Simone Tedeschi and Felix Friedrich for discussions on auto-purpleteaming.
## Citation
To cite Felix-8B, please use:
```
@misc{ontocord2024Felix-8B,
title={Felix-8B: A model built with lawfulness alignment },
author={Quan Nguyen, Vincent Hines, Huu Nguyen, Patrick Schramowski},
year={2024},
}
```
** We will update the contributor list as we further develop this research.
|
Eric111/UltraCatunaMayo | Eric111 | 2024-03-23T14:15:03Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/UltraMerge-7B",
"Eric111/CatunaMayo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T14:11:46Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/UltraMerge-7B
- Eric111/CatunaMayo
---
# UltraCatunaMayo
UltraCatunaMayo is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/UltraMerge-7B](https://huggingface.co/mlabonne/UltraMerge-7B)
* [Eric111/CatunaMayo](https://huggingface.co/Eric111/CatunaMayo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/UltraMerge-7B
layer_range: [0, 32]
- model: Eric111/CatunaMayo
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/UltraMerge-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Josephgflowers/Tinyllama-320M-Cinder-v1 | Josephgflowers | 2024-03-26T02:45:13Z | 739 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T20:34:40Z | ---
license: mit
widget:
- text: '<|user|>
Can you tell me a space adventure story?</s>
<|assistant|>'
---
## Overview
Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
This model was built by applying depth upscaling to a 220M TinyLlama and retraining it with the Cinder dataset. |
nbeerbower/bruphin-kappa | nbeerbower | 2024-03-24T23:44:34Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:nbeerbower/bruphin-iota",
"base_model:nbeerbower/bruphin-epsilon",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T23:41:15Z | ---
license: apache-2.0
base_model:
- nbeerbower/bruphin-iota
- nbeerbower/bruphin-epsilon
library_name: transformers
tags:
- mergekit
- merge
---
# bruphin-kappa
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
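For intuition, a toy sketch of what SLERP does to a pair of weight tensors (illustrative only; mergekit's actual implementation additionally handles per-layer filters and the `t` schedule shown in the configuration below):
```python
import torch

def slerp(t, a, b, eps=1e-8):
    # Spherical linear interpolation between two flattened weight tensors.
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel -> fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

layer_a, layer_b = torch.randn(16, 16), torch.randn(16, 16)  # stand-ins for real weights
merged = slerp(0.5, layer_a.flatten(), layer_b.flatten()).reshape(16, 16)
```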
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-iota](https://huggingface.co/nbeerbower/bruphin-iota)
* [nbeerbower/bruphin-epsilon](https://huggingface.co/nbeerbower/bruphin-epsilon)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/bruphin-iota
layer_range: [0, 32]
- model: nbeerbower/bruphin-epsilon
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/bruphin-epsilon
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
mradermacher/iris-7b-GGUF | mradermacher | 2024-05-06T05:52:26Z | 739 | 0 | transformers | [
"transformers",
"gguf",
"finetuned",
"en",
"ko",
"base_model:davidkim205/iris-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-27T19:16:18Z | ---
base_model: davidkim205/iris-7b
language:
- en
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- finetuned
---
## About
static quants of https://huggingface.co/davidkim205/iris-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
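As an alternative to the clients covered in those READMEs, a minimal `llama-cpp-python` sketch (assuming you have downloaded one of the quants listed below, e.g. Q4_K_M, to the current directory; the context size and prompt are illustrative):
```python
from llama_cpp import Llama

llm = Llama(model_path="./iris-7b.Q4_K_M.gguf", n_ctx=4096)  # path to the downloaded quant
out = llm("Translate to Korean: The weather is nice today.", max_tokens=128)
print(out["choices"][0]["text"])
```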
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/iris-7b-GGUF/resolve/main/iris-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LeroyDyer/Mixtral_AI_Cyber_3.0 | LeroyDyer | 2024-04-09T16:24:51Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"code",
"Cyber-Series",
"conversational",
"en",
"arxiv:2306.01708",
"base_model:LeroyDyer/Mixtral_AI_Cyber_2.0",
"base_model:LeroyDyer/Mixtral_AI_Cyber_1.0",
"base_model:LeroyDyer/Mixtral_Chat_X_128k",
"base_model:LeroyDyer/Mixtral_instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T09:40:56Z | ---
base_model:
- LeroyDyer/Mixtral_AI_Cyber_2.0
- LeroyDyer/Mixtral_AI_Cyber_1.0
- LeroyDyer/Mixtral_Chat_X_128k
- LeroyDyer/Mixtral_instruct
library_name: transformers
tags:
- mergekit
- merge
- code
- Cyber-Series
license: mit
language:
- en
---
A merge of the top models from this repo! With models like these, why go third party?
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [LeroyDyer/Mixtral_AI_Cyber_2.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_2.0) as a base.
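For intuition, a toy sketch of the TIES idea — trim small task-vector entries, elect a sign per parameter, then average only the agreeing deltas (illustrative only; mergekit's real implementation differs in details such as the density and weight gradients configured below):
```python
import torch

def ties_merge(base, finetuned, density=0.5):
    # 1) Task vectors: deltas of each fine-tune from the base weights.
    deltas = [ft - base for ft in finetuned]
    # 2) Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    # 3) Elect a sign per parameter from the summed trimmed deltas.
    sign = torch.sign(sum(trimmed))
    # 4) Merge: average only the deltas that agree with the elected sign.
    agree = [torch.where(torch.sign(t) == sign, t, torch.zeros_like(t)) for t in trimmed]
    count = sum((a != 0).float() for a in agree).clamp(min=1.0)
    return base + sum(agree) / count

base = torch.zeros(4)
merged = ties_merge(base, [torch.tensor([0.3, -0.2, 0.0, 0.5]),
                           torch.tensor([0.4, 0.1, -0.6, 0.5])])
```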
### Models Merged
The following models were included in the merge:
* [LeroyDyer/Mixtral_AI_Cyber_1.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_1.0)
* [LeroyDyer/Mixtral_Chat_X_128k](https://huggingface.co/LeroyDyer/Mixtral_Chat_X_128k)
* [LeroyDyer/Mixtral_instruct](https://huggingface.co/LeroyDyer/Mixtral_instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: LeroyDyer/Mixtral_AI_Cyber_1.0
parameters:
density: [1, 0.721, 0.1] # density gradient
weight: 1.0
- model: LeroyDyer/Mixtral_instruct
parameters:
density: 0.5
weight: [0, 0.3, 0.437, 1] # weight gradient
- model: LeroyDyer/Mixtral_Chat_X_128k
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: LeroyDyer/Mixtral_AI_Cyber_2.0
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
Kukedlc/Neural-4-QA-7b | Kukedlc | 2024-04-06T03:04:07Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment21-7B",
"CultriX/NeuralTrix-bf16",
"louisgrc/Montebello_7B_SLERP",
"CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:yam-peleg/Experiment21-7B",
"base_model:CultriX/NeuralTrix-bf16",
"base_model:louisgrc/Montebello_7B_SLERP",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T03:46:36Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment21-7B
- CultriX/NeuralTrix-bf16
- louisgrc/Montebello_7B_SLERP
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
base_model:
- yam-peleg/Experiment21-7B
- CultriX/NeuralTrix-bf16
- louisgrc/Montebello_7B_SLERP
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
license: apache-2.0
---
# Neural-4-QA-7b
Neural-4-QA-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16)
* [louisgrc/Montebello_7B_SLERP](https://huggingface.co/louisgrc/Montebello_7B_SLERP)
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
## 🧩 Configuration
```yaml
models:
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
# No parameters necessary for base model
- model: yam-peleg/Experiment21-7B
parameters:
density: 0.66
weight: 0.2
- model: CultriX/NeuralTrix-bf16
parameters:
density: 0.55
weight: 0.2
- model: louisgrc/Montebello_7B_SLERP
parameters:
density: 0.55
weight: 0.2
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
parameters:
density: 0.44
weight: 0.2
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
density: 0.66
weight: 0.2
merge_method: dare_ties
base_model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-4-QA-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jisukim8873/mistral-7B-alpaca-case-0-2 | jisukim8873 | 2024-04-01T04:27:05Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T02:38:18Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chrischain/SatoshiNv5 | chrischain | 2024-04-02T03:19:23Z | 739 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finance",
"legal",
"biology",
"art",
"en",
"license:cc-by-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T21:04:58Z | ---
license: cc-by-2.0
language:
- en
tags:
- finance
- legal
- biology
- art
---
Behold, one of the first fine-tunes of Mistral's 7B v0.2 base model. SatoshiN was trained for 4 epochs at a 2e-4 learning rate (cosine schedule) on a diverse custom dataset, followed by a polishing round on the same dataset at a 1e-4 linear learning rate.
It's a nice assistant that isn't afraid to ask questions and gather additional information before providing a response to user prompts.
| Model | Wikitext Perplexity |
|---|---|
| SatoshiN | 6.27 |
| Base-Model | 5.4 |

*Similar to SOTA models, this model runs a bit hot; try temperatures below 0.5 if you experience any nonsense.*
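A sketch of how a Wikitext perplexity figure like the one above can be reproduced (assumptions: sliding-window evaluation with `transformers`; the window and stride values are illustrative, and averaging per-window losses is an approximation of the exact token-weighted computation):
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chrischain/SatoshiNv5"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids
max_len, stride, nlls, prev_end = 2048, 512, [], 0
for begin in range(0, ids.size(1), stride):
    end = min(begin + max_len, ids.size(1))
    trg_len = end - prev_end                      # tokens not scored yet in this window
    input_ids = ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100               # ignore already-scored context tokens
    with torch.no_grad():
        nlls.append(model(input_ids, labels=target_ids).loss)
    prev_end = end
    if end == ids.size(1):
        break
print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```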
|
Locutusque/Hercules-4.0-Yi-34B | Locutusque | 2024-04-07T15:27:48Z | 739 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"science",
"biology",
"chemistry",
"not-for-all-audiences",
"conversational",
"en",
"dataset:Locutusque/hercules-v4.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T10:39:11Z | ---
library_name: transformers
tags:
- medical
- science
- biology
- chemistry
- not-for-all-audiences
license: apache-2.0
datasets:
- Locutusque/hercules-v4.0
language:
- en
---
# Hercules-4.0-Yi-34B

## Model Description
Hercules-4.0-Yi-34B is a fine-tuned language model derived from the Yi-34B language model. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. The dataset used for fine-tuning, also named Hercules-v4.0, expands upon the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets. This fine-tuning has equipped Hercules-4.0-Yi-34B with enhanced abilities in:
- Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology.
- Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values.
- Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more.
## Intended Uses & Potential Bias
Hercules-4.0-Yi-34B is well-suited to the following applications:
- Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines.
- Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping.
**Important Note: Although Hercules-v4.0 is carefully constructed, it's important to be aware that the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.**
# Quants
GGUF: https://huggingface.co/bartowski/Hercules-4.0-Yi-34B-GGUF
ExLlamaV2: https://huggingface.co/bartowski/Hercules-4.0-Yi-34B-exl2
## Limitations and Risks
- Toxicity: The dataset contains toxic or harmful examples.
- Hallucinations and Factual Errors: Like other language models, Hercules-4.0-Yi-34B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.
## Training Procedure
- This model was trained on 8 kaggle TPUs, using torch xla SPMD for high MXU efficiency. There was no expense on my end (meaning you can reproduce this too!)
- a learning rate of 1e-4 with an end factor of 0.001 was used.
- No mixed precision was used, with the default dtype being bfloat16.
- A total batch size of 64 was used.
- Trained on 75,000 examples of Hercules-v4.0
- I used LoRA to freeze about 97% of the model parameters. Quantization was not used.
- This model was trained on OpenAI's ChatML prompt format. Because this model has function calling capabilities, the prompt format is slightly different, here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}<|endoftext|>```
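A sketch of assembling that prompt string in Python (the message contents are hypothetical; the special tokens follow the format described above):
```python
def build_prompt(system, user, call=None, function=None):
    # Assemble the extended ChatML-style prompt with optional function-calling turns.
    parts = [f"<|im_start|>system\n{system}<|im_end|>",
             f"<|im_start|>user\n{user}<|im_end|>"]
    if call is not None:
        parts.append(f"<|im_start|>call\n{call}<|im_end|>")
    if function is not None:
        parts.append(f"<|im_start|>function\n{function}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_prompt(
    "You are a helpful assistant with tool access.",
    "What is the weather in Paris?",
    call='{"name": "get_weather", "arguments": {"city": "Paris"}}',
    function='{"temperature_c": 21, "condition": "sunny"}',
)
```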
This model was fine-tuned using my TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
## Evaluation
Coming Soon |
MiniMoog/Mergerix-7b-v0.2 | MiniMoog | 2024-04-02T20:35:56Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T20:20:23Z | ---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
# Mergerix-7b-v0.2
Mergerix-7b-v0.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
## 🧩 Configuration
```yaml
models:
- model: alpindale/Mistral-7B-v0.2-hf
- model: liminerity/M7-7b
- model: rwitz/experiment26-truthy-iter-0
merge_method: model_stock
base_model: alpindale/Mistral-7B-v0.2-hf
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MiniMoog/Mergerix-7b-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MiniMoog/Mergerix-7b-v0.3 | MiniMoog | 2024-04-02T20:56:00Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T20:49:46Z | ---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
# Mergerix-7b-v0.3
Mergerix-7b-v0.3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_32-7B-slerp
- model: yam-peleg/Experiment26-7B
- model: liminerity/M7-7b
merge_method: model_stock
base_model: liminerity/M7-7b
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MiniMoog/Mergerix-7b-v0.3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Changgil/K2S3-Mistral-7b-v1.43 | Changgil | 2024-04-05T09:33:01Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T09:27:01Z | ---
license: cc-by-nc-4.0
language:
- en
---
---
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.43
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.
* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.
### Training Method
* This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).
* 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp.
* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. |
Nitral-AI/Nyan-Stunna-7B | Nitral-AI | 2024-04-07T03:38:40Z | 739 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:arlineka/KittyNyanster-v1",
"base_model:Nitral-AI/KukulStanta-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T13:25:31Z | ---
base_model:
- arlineka/KittyNyanster-v1
- Nitral-AI/KukulStanta-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

# RP "With the zoomies"
# Vision/multimodal capabilities:
Quants Found Here:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision** you need to load the specified **mmproj** file, this can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:
 |
ABX-AI/Silver-Sun-v2-11B | ABX-AI | 2024-04-09T16:20:32Z | 739 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T20:36:44Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
license: other
---
# Silver-Sun-v2-11B

> This is an updated version of Silver-Sun-11B. The change is that now the Solstice-FKL-v2-10.7B merge uses Sao10K/Fimbulvetr-11B-v2 instead of v1.
> Additionally, the config of the original Silver-Sun was wrong, and I have also updated that.
> As expected, this is a HIGHLY uncensored model. It should perform even better than the v1 due to the updated Fimb, and the fixed config.
**Works with Alpaca, and from my tests, also ChatML. However Alpaca may be a better option. Try it out and use whatever works better for you.**
**Due to a quirk with Solar, if you want the best quality either launch at 4K context, or launch at 8K (and possibly beyond - have not tested it that high) with 4k context pre-loaded in the prompt.**
> This model is intended for fictional storytelling and writing, focusing on NSFW capabilities and lack of censorship for RP reasons.
[GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Silver-Sun-v2-11B-GGUF-IQ-Imatrix)
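Since Alpaca formatting is recommended above, a minimal prompt-template sketch (the exact preamble wording is an assumption; adjust to taste):
```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Write a short scene set during a thunderstorm.")
```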
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Himitsui/Kaiju-11B](https://huggingface.co/Himitsui/Kaiju-11B)
* ABX-AI/Solstice-FKL-v2-10.7B
>[!NOTE]
>A mixture of [Sao10K/Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1) and
>ABX-AI/Fimbulvetr-Kuro-Lotus-v2-10.7B, which is saishf/Fimbulvetr-Kuro-Lotus-10.7B updated with Fimbulvetr v2.
### OpenLLM Eval Results
[Detailed Results + Failed GSM8K](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Silver-Sun-v2-11B)
>[!NOTE]
>I had to remove GSM8K from the results and manually re-average the rest. GSM8K failed due to some issue with formatting, which is not experienced during practical usage.
>By removing the GSM8K score, the average is VERY close to upstage/SOLAR-10.7B-v1.0 (74.20), which would make sense.
>Feel free to ignore the actual average and use the other scores individually for reference.
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.04|
|AI2 Reasoning Challenge (25-Shot)|69.88|
|HellaSwag (10-Shot) |87.81|
|MMLU (5-Shot) |66.74|
|TruthfulQA (0-shot) |62.49|
|Winogrande (5-shot) |83.27|
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ./MODELS/Solstice-FKL-v2-10.7B
layer_range: [0, 48]
- model: Himitsui/Kaiju-11B
layer_range: [0, 48]
merge_method: slerp
base_model: ./MODELS/Solstice-FKL-v2-10.7B
parameters:
t:
- filter: self_attn
value: [0.6, 0.7, 0.8, 0.9, 1]
- filter: mlp
value: [0.4, 0.3, 0.2, 0.1, 0]
- value: 0.5
dtype: bfloat16
``` |
Nitral-AI/Nyanade_Stunna-Maid-7B | Nitral-AI | 2024-04-13T15:49:29Z | 739 | 8 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Nitral-AI/Nyanade-Maid-7B",
"base_model:Nitral-AI/Nyan-Stunna-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-12T01:39:15Z | ---
base_model:
- Nitral-AI/Nyanade-Maid-7B
- Nitral-AI/Nyan-Stunna-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

# Quants thanks to Lewdiculus: https://huggingface.co/Lewdiculous/Nyanade_Stunna-Maid-7B-GGUF-IQ-Imatrix
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision** you need to load the specified **mmproj** file, this can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:
 |
PrunaAI/llama3-42b-v0-GGUF-smashed | PrunaAI | 2024-04-23T21:34:52Z | 739 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-23T12:32:20Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the chargoddard/llama3-42b-v0 model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/llama3-42b-v0-GGUF-smashed and below it, a specific filename to download, such as: llama3-42b-v0.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/llama3-42b-v0-GGUF-smashed llama3-42b-v0.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/llama3-42b-v0-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/llama3-42b-v0-GGUF-smashed llama3-42b-v0.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m llama3-42b-v0.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./llama3-42b-v0.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./llama3-42b-v0.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Ramikan-BR/tinyllama-coder-py-4bit-v10 | Ramikan-BR | 2024-05-26T19:18:18Z | 739 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"code",
"lora",
"peft",
"conversational",
"en",
"dataset:Ramikan-BR/data-oss_instruct-decontaminated_python.jsonl",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-25T12:46:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- code
- lora
- peft
base_model: unsloth/tinyllama-chat-bnb-4bit
pipeline_tag: text-generation
datasets: Ramikan-BR/data-oss_instruct-decontaminated_python.jsonl
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **Model type:** [text-generation/Python Coder]
- **Language(s) (NLP):** [en]
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
### Model Description
<!-- Provide a longer summary of what this model is. -->
### Training Data
datasets: [Ramikan-BR/data-oss_instruct-decontaminated_python.jsonl](https://huggingface.co/datasets/Ramikan-BR/data-oss_instruct-decontaminated_python.jsonl)
### Training Procedure
The model was refined using [Unsloth](https://github.com/unslothai/unsloth). The dataset [ise-uiuc/Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K/blob/main/data-oss_instruct-decontaminated.jsonl) was filtered to keep only Python data and split into 10 parts; each refinement ran for 2 epochs using the adafactor or adamw_8bit optimizer (adafactor seems to deliver lower loss).
### Model Sources [optional]
base_model: [unsloth/tinyllama-chat-bnb-4bit](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit)
model: [Ramikan-BR/tinyllama-coder-py-4bit-v10](https://huggingface.co/Ramikan-BR/tinyllama-coder-py-4bit-v10)
gguf_f16: [tinyllama-coder-py-4bit-v10-unsloth.F16.gguf](https://huggingface.co/Ramikan-BR/tinyllama-coder-py-4bit-v10/blob/main/tinyllama-coder-py-4bit-v10-unsloth.F16.gguf)
gguf_Q4_K_M: [tinyllama-coder-py-4bit-v10-unsloth.Q4_K_M.gguf](https://huggingface.co/Ramikan-BR/tinyllama-coder-py-4bit-v10/blob/main/tinyllama-coder-py-4bit-v10-unsloth.Q4_K_M.gguf)
gguf_Q8_0: [tinyllama-coder-py-4bit-v10-unsloth.Q8_0.gguf](https://huggingface.co/Ramikan-BR/tinyllama-coder-py-4bit-v10/blob/main/tinyllama-coder-py-4bit-v10-unsloth.Q8_0.gguf)
#### Training Hyperparameters
The [Unsloth](https://github.com/unslothai/unsloth) notebook I used for refinement: [TinyLlama](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes # xformers "xformers<0.0.26"
import os
from google.colab import drive
drive.mount('/content/drive')
from unsloth import FastLanguageModel
import torch
max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
"unsloth/mistral-7b-bnb-4bit",
"unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"unsloth/llama-2-7b-bnb-4bit",
"unsloth/llama-2-13b-bnb-4bit",
"unsloth/codellama-34b-bnb-4bit",
"unsloth/tinyllama-bnb-4bit",
"unsloth/gemma-7b-bnb-4bit", # New Google 6 trillion tokens model 2.5x faster!
"unsloth/gemma-2b-bnb-4bit",
] # More models at https://huggingface.co/unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "Ramikan-BR/tinyllama-coder-py-4bit_LORA-v9", # "unsloth/tinyllama" for 16bit loading
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
# token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)
model = FastLanguageModel.get_peft_model(
model,
r = 256, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 512,
lora_dropout = 0, # Currently only supports dropout = 0
bias = "none", # Currently only supports bias = "none"
use_gradient_checkpointing = True, # @@@ IF YOU GET OUT OF MEMORY - set to True @@@
random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
)
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Input:
{}
### Output:
{}"""
EOS_TOKEN = tokenizer.eos_token
def formatting_prompts_func(examples):
inputs = examples["problem"]
outputs = examples["solution"]
texts = []
for input, output in zip(inputs, outputs):
# Must add EOS_TOKEN, otherwise your generation will go on forever!
text = alpaca_prompt.format(input, output) + EOS_TOKEN
texts.append(text)
return { "text" : texts}
pass
from datasets import load_dataset
dataset = load_dataset('json', data_files='/content/drive/MyDrive/data-oss_instruct-py-10.jsonl', split='train')
dataset = dataset.map(formatting_prompts_func, batched=True)
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported
from transformers.utils import logging
logging.set_verbosity_info()
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = dataset,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 2,
packing = True, # Packs short sequences together to save time!
args = TrainingArguments(
per_device_train_batch_size = 2,
gradient_accumulation_steps = 256,
warmup_ratio = 0.1,
num_train_epochs = 2,
learning_rate = 2e-4,
fp16 = not torch.cuda.is_bf16_supported(),
bf16 = torch.cuda.is_bf16_supported(),
logging_steps = 1,
optim = "adafactor", # adamw_torch ou adamw_torch_fused +10% velocidade ou adafactor ou adamw_8bit
weight_decay = 0.1,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
),
)
trainer_stats = trainer.train()
model.save_pretrained("lora_model") # Local saving
tokenizer.save_pretrained("lora_model")
model.push_to_hub("Ramikan-BR/tinyllama-coder-py-4bit_LORA-v10", token = "hf_...") # Online saving
tokenizer.push_to_hub("Ramikan-BR/tinyllama-coder-py-4bit_LORA-v10", token = "hf_...") # Online saving
# Merge to 16bit
model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit",)
model.push_to_hub_merged("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, save_method = "merged_16bit", token = "hf_...")
# Merge to 4bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit",)
if False: model.push_to_hub_merged("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, save_method = "merged_4bit", token = "hf_...")
# Just LoRA adapters
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora",)
if False: model.push_to_hub_merged("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, save_method = "lora", token = "hf_...")
# Save to 8bit Q8_0
model.save_pretrained_gguf("model", tokenizer,)
model.push_to_hub_gguf("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, token = "hf_...")
# Save to 16bit GGUF
model.save_pretrained_gguf("model", tokenizer, quantization_method = "f16")
model.push_to_hub_gguf("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, quantization_method = "f16", token = "hf_...")
# Save to q4_k_m GGUF
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
model.push_to_hub_gguf("Ramikan-BR/tinyllama-coder-py-4bit-v10", tokenizer, quantization_method = "q4_k_m", token = "hf_...")
Loss for 5 epochs in the last training session of the last part of the dataset:
==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1
\\ /| Num examples = 407 | Num Epochs = 5
O^O/ \_/ \ Batch size per device = 2 | Gradient Accumulation steps = 256
\ / Total batch size = 512 | Total steps = 5
"-____-" Number of trainable parameters = 201,850,880
[5/5 29:36, Epoch 3/5]
Step Training Loss
1 0.568000
2 0.145300
3 0.506100
4 0.331900
5 0.276100
Quick test 1 after training the last part of the dataset:
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
alpaca_prompt.format(
"Continue the fibonnaci sequence.", # instruction
"1, 1, 2, 3, 5, 8", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
AI Response: ['<s> Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Input:\nContinue the fibonnaci sequence.\n\n### Output:\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 420, 787, 1444, 2881, 4765, 8640']
Quick test 2 after training the last part of the dataset:
# alpaca_prompt = Copied from above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
alpaca_prompt.format(
"Continue the fibonnaci sequence.", # instruction
"1, 1, 2, 3, 5, 8", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
AI Response: <s> Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Input:
Continue the fibonnaci sequence.
### Output:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 420, 787, 1444, 2881, 4765, 8640, 17281, 31362, 65325, 128672, 251345, 410000, 720000, 1280000,
Quick test 3 after training the last part of the dataset:
if False:
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
# alpaca_prompt = You MUST copy from above!
inputs = tokenizer(
[
alpaca_prompt.format(
"What is a famous tall tower in Paris?", # instruction
"", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
AI Response: <s> Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Input:
What is a famous tall tower in Paris?
### Output:
The famous tall tower in Paris is the Eiffel Tower. It is a 300-meter-tall steel tower located in the heart of Paris, France. The tower was built in 18892 and is a popular tourist attraction. It is also a symbol of the city
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stablediffusionapi/anime-xl-v8 | stablediffusionapi | 2024-06-19T07:25:16Z | 739 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-19T07:20:46Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# anime-xl-v8 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "anime-xl-v8"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/anime-xl-v8)
Model link: [View model](https://modelslab.com/models/anime-xl-v8)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "anime-xl-v8",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
microsoft/unispeech-sat-base-plus-sv | microsoft | 2021-12-17T13:56:17Z | 738 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-xvector",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.05752",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- speech
---
# UniSpeech-SAT-Base for Speaker Verification
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Fine-tuning details
The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
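For intuition, here is a minimal sketch of the Additive Margin Softmax objective mentioned above; the `scale` and `margin` values are illustrative assumptions, not the settings used for this checkpoint.
```python
import torch.nn.functional as F

def am_softmax_loss(embeddings, class_weights, labels, scale=30.0, margin=0.2):
    # cosine similarity between L2-normalized embeddings and class weight vectors
    logits = F.normalize(embeddings, dim=-1) @ F.normalize(class_weights, dim=-1).t()
    # subtract the additive margin from the target-class logit only, then scale
    one_hot = F.one_hot(labels, num_classes=class_weights.size(0)).to(logits.dtype)
    return F.cross_entropy(scale * (logits - margin * one_hot), labels)
```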
# Usage
## Speaker Verification
```python
from transformers import Wav2Vec2FeatureExtractor, UniSpeechSatForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/unispeech-sat-base-plus-sv')
model = UniSpeechSatForXVector.from_pretrained('microsoft/unispeech-sat-base-plus-sv')
# audio files are decoded on the fly
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.89 # the optimal threshold is dataset-dependent
if similarity < threshold:
print("Speakers are not the same!")
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
azale-ai/DukunLM-7B-V1.0-Uncensored | azale-ai | 2023-08-14T03:06:03Z | 738 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"qlora",
"wizardlm",
"uncensored",
"instruct",
"chat",
"alpaca",
"indonesia",
"id",
"en",
"dataset:MBZUAI/Bactrian-X",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-13T03:55:26Z | ---
license: cc-by-nc-4.0
datasets:
- MBZUAI/Bactrian-X
language:
- id
- en
tags:
- qlora
- wizardlm
- uncensored
- instruct
- chat
- alpaca
- indonesia
---
# DukunLM V1.0 - Indonesian Language Model 🧙♂️
🚀 Welcome to the DukunLM V1.0 repository! DukunLM V1.0 is an open-source language model trained to generate Indonesian text using the power of AI. DukunLM, meaning "WizardLM" in Indonesian, is here to revolutionize language generation 🌟. This is an updated version of [azale-ai/DukunLM-Uncensored-7B](https://huggingface.co/azale-ai/DukunLM-Uncensored-7B), now released as a full model rather than only an adapter as before 👽.
## Model Details
| Name Model | Parameters | Google Colab | Base Model | Dataset | Prompt Format | Fine Tune Method | Sharded Version |
|----------------------------------------------------------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|--------------------------------------------|--------------------------------------------|
| [DukunLM-7B-V1.0-Uncensored](https://huggingface.co/azale-ai/DukunLM-7B-V1.0-Uncensored) | 7B | [Link](https://colab.research.google.com/drive/1UEiRqkfU1jGVMM9we4X3ooN1kkNGtfLd) | [ehartford/WizardLM-7B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored) | [MBZUAI/Bactrian-X (Indonesian subset)](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/id/train) | [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | [QLoRA](https://github.com/artidoro/qlora) | [Link](https://huggingface.co/azale-ai/DukunLM-7B-V1.0-Uncensored-sharded) |
| [DukunLM-13B-V1.0-Uncensored](https://huggingface.co/azale-ai/DukunLM-13B-V1.0-Uncensored) | 13B | [Link](https://colab.research.google.com/drive/19xXYcAwVFLSItHm__GhPTYMryOGjdFkF) | [ehartford/WizardLM-13B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-13B-V1.0-Uncensored) | [MBZUAI/Bactrian-X (Indonesian subset)](https://huggingface.co/datasets/MBZUAI/Bactrian-X/viewer/id/train) | [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | [QLoRA](https://github.com/artidoro/qlora) | [Link](https://huggingface.co/azale-ai/DukunLM-13B-V1.0-Uncensored-sharded) |
⚠️ **Warning**: DukunLM is an uncensored model without filters or alignment. Please use it responsibly as it may contain errors, cultural biases, and potentially offensive content. ⚠️
## Installation
To use DukunLM, ensure that PyTorch has been installed and that you have an Nvidia GPU (or use Google Colab). After that you need to install the required dependencies:
```bash
pip3 install -U git+https://github.com/huggingface/transformers.git
pip3 install -U git+https://github.com/huggingface/peft.git
pip3 install -U git+https://github.com/huggingface/accelerate.git
pip3 install -U bitsandbytes==0.39.0 einops==0.6.1 sentencepiece
```
## How to Use
### Normal Model
#### Stream Output
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model = AutoModelForCausalLM.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored")
streamer = TextStreamer(tokenizer)
instruction_prompt = "Jelaskan mengapa air penting bagi kehidupan manusia."
input_prompt = ""
if not input_prompt:
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt)
else:
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt, input=input_prompt)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
_ = model.generate(
inputs=inputs.input_ids,
streamer=streamer,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
max_length=2048, temperature=0.7,
do_sample=True, top_k=4, top_p=0.95
)
```
#### No Stream Output
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored")
instruction_prompt = "Jelaskan mengapa air penting bagi kehidupan manusia."
input_prompt = ""
if not input_prompt:
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt)
else:
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt, input=input_prompt)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
inputs=inputs.input_ids,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
max_length=2048, temperature=0.7,
do_sample=True, top_k=4, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Quantize Model
#### Stream Output
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
model = AutoModelForCausalLM.from_pretrained(
"azale-ai/DukunLM-7B-V1.0-Uncensored-sharded",
load_in_4bit=True,
torch_dtype=torch.float32,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
)
tokenizer = AutoTokenizer.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored-sharded")
streamer = TextStreamer(tokenizer)
instruction_prompt = "Jelaskan mengapa air penting bagi kehidupan manusia."
input_prompt = ""
if not input_prompt:
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt)
else:
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt, input=input_prompt)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
_ = model.generate(
inputs=inputs.input_ids,
streamer=streamer,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
max_length=2048, temperature=0.7,
do_sample=True, top_k=4, top_p=0.95
)
```
#### No Stream Output
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model = AutoModelForCausalLM.from_pretrained(
"azale-ai/DukunLM-7B-V1.0-Uncensored-sharded",
load_in_4bit=True,
torch_dtype=torch.float32,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
llm_int8_threshold=6.0,
llm_int8_has_fp16_weight=False,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
)
tokenizer = AutoTokenizer.from_pretrained("azale-ai/DukunLM-7B-V1.0-Uncensored-sharded")
instruction_prompt = "Jelaskan mengapa air penting bagi kehidupan manusia."
input_prompt = ""
if not input_prompt:
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt)
else:
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
"""
prompt = prompt.format(instruction=instruction_prompt, input=input_prompt)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
inputs=inputs.input_ids,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
max_length=2048, temperature=0.7,
do_sample=True, top_k=4, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Benchmark
Coming soon, stay tuned 🙂🙂.
## Limitations
- The base model's language is English, and it was fine-tuned for Indonesian
- Cultural and contextual biases
## License
DukunLM V1.0 is licensed under the [Creative Commons NonCommercial (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
## Contributing
We welcome contributions to enhance and improve DukunLM V1.0. If you have any suggestions or find any issues, please feel free to open an issue or submit a pull request. We are also open to sponsorship for compute power.
## Contact Us
[[email protected]](mailto:[email protected]) |
umd-zhou-lab/claude2-alpaca-13B | umd-zhou-lab | 2023-10-22T16:07:55Z | 738 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:umd-zhou-lab/claude2_alpaca",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-18T22:23:37Z | ---
license: llama2
datasets:
- umd-zhou-lab/claude2_alpaca
language:
- en
---
# Model Card for umd-zhou-lab/claude2-alpaca-13B
<!-- Provide a quick summary of what the model is/does. -->
This model is trained by fine-tuning llama-2 with claude2 alpaca data.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** UMD Tianyi Zhou Lab
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca)
- **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca)
## Uses
The primary use of this model is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training
We use the prompt from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
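For reference, the Alpaca prompt templates published in the Stanford Alpaca repository look like this (shown as a sketch; the exact training-time formatting for this model is assumed to follow them):
```python
ALPACA_PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

ALPACA_PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
```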
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| Model (13B) | 128 | 1e-5 | 5 | 2048 | 0 |
## Performance
Compared to llama2-chat, our models achieve better average performance.<br>
| | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length |
|---|---|---|---|---|---|---|---|
| Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 |
| Llama-2-13b-chat | 59.935 | 59.04| 81.94 | 54.64 | 44.12 | 81.09 | 1513 |
|||||||||
| claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 |
| claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 |
## Citation
Please consider citing our paper if you think our codes, data, or models are useful. Thank you!
```
@misc{claude2-alpaca,
author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang},
title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}},
}
``` |
Yntec/Darkside | Yntec | 2023-09-29T07:52:19Z | 738 | 2 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Horror",
"Pixar",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-29T06:54:29Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Horror
- Pixar
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DucHaiten Darkside
fp16 no-ema version of this model: https://civitai.com/models/5426?modelVersionId=6311
Samples and prompt:


Cartoon Pretty CUTE Girl, ilya kuvshinov detailed, DETAILED CHIBI EYES, gorgeous detailed hair, high school, Magazine ad, iconic, 1949, sharp focus. visible brushstrokes By KlaysMoji and artgerm and Clay Mann and and simon cowell and leyendecker. By Dave Rapoza. Pretty CUTE girl. |
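A minimal, hedged sketch of loading this checkpoint with 🤗 diffusers — the model id is taken from this entry's header, and the sampler settings are illustrative assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# load the fp16 checkpoint and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Darkside", torch_dtype=torch.float16
).to("cuda")

prompt = "Cartoon Pretty CUTE Girl, detailed chibi eyes, gorgeous detailed hair, magazine ad, iconic, 1949, sharp focus"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("darkside_sample.png")
```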
jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha | jeff31415 | 2023-10-25T13:49:33Z | 738 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:bigcode/starcoderdata",
"dataset:cerebras/SlimPajama-627B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-24T04:09:49Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
language:
- en
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
#### Base model:
https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints/tree/step-720k-token-1510B
This fine-tune was done on the "early" version of tinyllama-1.5T, which suffers from a bug in dataset processing; see https://github.com/jzhang38/TinyLlama/issues/67.
Although it suffers from this glitch, its performance does not appear to be harmed and still shows improvement (metrics needed).
#### Dataset:
Fine-tuned on the OpenOrca GPT4 subset for 1 epoch, using the ChatML format.
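For reference, a sketch of the standard ChatML turn layout (the system message here is illustrative; the exact system prompt used during training is not specified in this card):
```python
# ChatML wraps each turn in <|im_start|>ROLE ... <|im_end|> markers.
chatml_prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain what a language model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```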
#### Model License:
Apache 2.0, following the TinyLlama base model.
#### Quantisation:
GGUF format: https://huggingface.co/s3nh/jeff31415-TinyLlama-1.1B-1.5T-OpenOrca-Alpha-GGUF
#### Hardware and training details:
Hardware: 1× RTX A5000, ~16 hours to complete 1 epoch. The GPU was rented from autodl.com; this fine-tuning cost around $3.
See https://wandb.ai/jeff200402/TinyLlama-1.5T-alpha-Orca?workspace= for more details. |
hf-audio/wav2vec2-bert-CV16-en | hf-audio | 2024-01-15T11:41:00Z | 738 | 4 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_16_0",
"generated_from_trainer",
"en",
"base_model:ylacombe/w2v-bert-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-01-05T19:32:09Z | ---
language:
- en
base_model: ylacombe/w2v-bert-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_16_0
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-bert-CV16-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-bert-CV16-en
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the MOZILLA-FOUNDATION/COMMON_VOICE_16_0 - EN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2427
- Wer: 0.1455
- Cer: 0.0580
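The usage sections of this card are left as "More information needed", so here is a minimal, hedged inference sketch; it assumes a transformers version with Wav2Vec2-BERT support (see the framework versions at the end of this card) and uses a placeholder audio path:
```python
from transformers import pipeline

# automatic-speech-recognition pipeline built from this fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="hf-audio/wav2vec2-bert-CV16-en",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```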
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 36
- total_eval_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
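As a rough illustration, the hyperparameters above map onto 🤗 `TrainingArguments` roughly as sketched below; the multi-GPU launch, CTC data collator, and model setup are omitted, and the output directory name is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-bert-CV16-en",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    warmup_steps=10_000,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    fp16=True,                            # "Native AMP" mixed precision
    seed=42,
)
```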
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.9554 | 0.01 | 250 | 3.1731 | 0.9999 | 0.9942 |
| 2.7058 | 0.02 | 500 | 2.6717 | 1.0307 | 0.7486 |
| 0.9641 | 0.02 | 750 | 0.9895 | 0.6091 | 0.2035 |
| 0.6935 | 0.03 | 1000 | 0.7740 | 0.4821 | 0.1562 |
| 0.617 | 0.04 | 1250 | 0.6751 | 0.4008 | 0.1303 |
| 0.4826 | 0.05 | 1500 | 0.5920 | 0.3499 | 0.1170 |
| 0.4252 | 0.06 | 1750 | 0.5659 | 0.3056 | 0.1053 |
| 0.472 | 0.07 | 2000 | 0.5066 | 0.2869 | 0.1007 |
| 0.4042 | 0.07 | 2250 | 0.4604 | 0.2662 | 0.0950 |
| 0.4279 | 0.08 | 2500 | 0.5165 | 0.2587 | 0.0948 |
| 0.3586 | 0.09 | 2750 | 0.4440 | 0.2461 | 0.0895 |
| 0.2715 | 0.1 | 3000 | 0.5096 | 0.2468 | 0.0904 |
| 0.413 | 0.11 | 3250 | 0.4416 | 0.2350 | 0.0879 |
| 0.3142 | 0.11 | 3500 | 0.4591 | 0.2280 | 0.0856 |
| 0.286 | 0.12 | 3750 | 0.4529 | 0.2284 | 0.0860 |
| 0.3112 | 0.13 | 4000 | 0.4621 | 0.2320 | 0.0875 |
| 0.3294 | 0.14 | 4250 | 0.4528 | 0.2294 | 0.0862 |
| 0.3522 | 0.15 | 4500 | 0.4279 | 0.2287 | 0.0861 |
| 0.2977 | 0.15 | 4750 | 0.4403 | 0.2200 | 0.0830 |
| 0.2391 | 0.16 | 5000 | 0.4360 | 0.2161 | 0.0831 |
| 0.3025 | 0.17 | 5250 | 0.4214 | 0.2157 | 0.0831 |
| 0.309 | 0.18 | 5500 | 0.4060 | 0.2125 | 0.0818 |
| 0.2872 | 0.19 | 5750 | 0.4233 | 0.2189 | 0.0824 |
| 0.2796 | 0.2 | 6000 | 0.4055 | 0.2151 | 0.0823 |
| 0.2609 | 0.2 | 6250 | 0.4374 | 0.2194 | 0.0853 |
| 0.283 | 0.21 | 6500 | 0.4288 | 0.2215 | 0.0877 |
| 0.3028 | 0.22 | 6750 | 0.4180 | 0.2166 | 0.0837 |
| 0.2565 | 0.23 | 7000 | 0.4476 | 0.2268 | 0.0892 |
| 0.2824 | 0.24 | 7250 | 0.4057 | 0.2195 | 0.0850 |
| 0.325 | 0.24 | 7500 | 0.3926 | 0.2157 | 0.0849 |
| 0.336 | 0.25 | 7750 | 0.4469 | 0.2208 | 0.0879 |
| 0.304 | 0.26 | 8000 | 0.4292 | 0.2245 | 0.0886 |
| 0.2457 | 0.27 | 8250 | 0.4198 | 0.2204 | 0.0856 |
| 0.2768 | 0.28 | 8500 | 0.4330 | 0.2184 | 0.0859 |
| 0.2165 | 0.29 | 8750 | 0.4276 | 0.2173 | 0.0864 |
| 0.3015 | 0.29 | 9000 | 0.4255 | 0.2223 | 0.0882 |
| 0.308 | 0.3 | 9250 | 0.4356 | 0.2318 | 0.0925 |
| 0.2981 | 0.31 | 9500 | 0.4514 | 0.2226 | 0.0884 |
| 0.2944 | 0.32 | 9750 | 0.4182 | 0.2293 | 0.0901 |
| 0.3298 | 0.33 | 10000 | 0.4290 | 0.2275 | 0.0892 |
| 0.2523 | 0.33 | 10250 | 0.4032 | 0.2191 | 0.0865 |
| 0.2887 | 0.34 | 10500 | 0.4218 | 0.2284 | 0.0917 |
| 0.3156 | 0.35 | 10750 | 0.3930 | 0.2271 | 0.0898 |
| 0.2526 | 0.36 | 11000 | 0.4367 | 0.2304 | 0.0928 |
| 0.2561 | 0.37 | 11250 | 0.4261 | 0.2279 | 0.0916 |
| 0.2291 | 0.37 | 11500 | 0.4401 | 0.2231 | 0.0899 |
| 0.2521 | 0.38 | 11750 | 0.4101 | 0.2232 | 0.0895 |
| 0.2249 | 0.39 | 12000 | 0.4021 | 0.2270 | 0.0913 |
| 0.2917 | 0.4 | 12250 | 0.4124 | 0.2267 | 0.0915 |
| 0.2436 | 0.41 | 12500 | 0.4197 | 0.2257 | 0.0903 |
| 0.2976 | 0.42 | 12750 | 0.3951 | 0.2230 | 0.0896 |
| 0.2333 | 0.42 | 13000 | 0.4099 | 0.2250 | 0.0901 |
| 0.2261 | 0.43 | 13250 | 0.4328 | 0.2168 | 0.0876 |
| 0.2514 | 0.44 | 13500 | 0.3947 | 0.2208 | 0.0895 |
| 0.296 | 0.45 | 13750 | 0.3953 | 0.2149 | 0.0859 |
| 0.2426 | 0.46 | 14000 | 0.3831 | 0.2119 | 0.0852 |
| 0.2258 | 0.46 | 14250 | 0.4060 | 0.2263 | 0.0915 |
| 0.2565 | 0.47 | 14500 | 0.4057 | 0.2237 | 0.0901 |
| 0.2834 | 0.48 | 14750 | 0.4112 | 0.2167 | 0.0876 |
| 0.234 | 0.49 | 15000 | 0.3802 | 0.2133 | 0.0852 |
| 0.3084 | 0.5 | 15250 | 0.3837 | 0.2151 | 0.0871 |
| 0.3051 | 0.51 | 15500 | 0.3848 | 0.2145 | 0.0867 |
| 0.2364 | 0.51 | 15750 | 0.3817 | 0.2134 | 0.0870 |
| 0.2345 | 0.52 | 16000 | 0.3883 | 0.2163 | 0.0874 |
| 0.2235 | 0.53 | 16250 | 0.3740 | 0.2136 | 0.0869 |
| 0.2365 | 0.54 | 16500 | 0.3711 | 0.2112 | 0.0850 |
| 0.2449 | 0.55 | 16750 | 0.3805 | 0.2127 | 0.0858 |
| 0.2569 | 0.55 | 17000 | 0.3794 | 0.2124 | 0.0863 |
| 0.2273 | 0.56 | 17250 | 0.3922 | 0.2207 | 0.0895 |
| 0.2492 | 0.57 | 17500 | 0.3670 | 0.2195 | 0.0874 |
| 0.236 | 0.58 | 17750 | 0.3799 | 0.2120 | 0.0862 |
| 0.2823 | 0.59 | 18000 | 0.3734 | 0.2144 | 0.0867 |
| 0.2349 | 0.59 | 18250 | 0.3972 | 0.2175 | 0.0889 |
| 0.2156 | 0.6 | 18500 | 0.3729 | 0.2157 | 0.0867 |
| 0.2812 | 0.61 | 18750 | 0.3905 | 0.2117 | 0.0854 |
| 0.242 | 0.62 | 19000 | 0.3912 | 0.2114 | 0.0855 |
| 0.2237 | 0.63 | 19250 | 0.3794 | 0.2155 | 0.0877 |
| 0.255 | 0.64 | 19500 | 0.3770 | 0.2079 | 0.0840 |
| 0.1899 | 0.64 | 19750 | 0.3796 | 0.2145 | 0.0868 |
| 0.2793 | 0.65 | 20000 | 0.3784 | 0.2145 | 0.0863 |
| 0.2099 | 0.66 | 20250 | 0.3956 | 0.2161 | 0.0875 |
| 0.22 | 0.67 | 20500 | 0.3804 | 0.2135 | 0.0875 |
| 0.2213 | 0.68 | 20750 | 0.3803 | 0.2100 | 0.0849 |
| 0.245 | 0.68 | 21000 | 0.3783 | 0.2142 | 0.0870 |
| 0.2188 | 0.69 | 21250 | 0.3873 | 0.2163 | 0.0861 |
| 0.2613 | 0.7 | 21500 | 0.3646 | 0.2105 | 0.0844 |
| 0.1907 | 0.71 | 21750 | 0.3830 | 0.2101 | 0.0853 |
| 0.2095 | 0.72 | 22000 | 0.3794 | 0.2087 | 0.0849 |
| 0.2319 | 0.73 | 22250 | 0.3548 | 0.2087 | 0.0842 |
| 0.2049 | 0.73 | 22500 | 0.3782 | 0.2075 | 0.0837 |
| 0.2248 | 0.74 | 22750 | 0.3736 | 0.2100 | 0.0845 |
| 0.2277 | 0.75 | 23000 | 0.3712 | 0.2105 | 0.0845 |
| 0.2115 | 0.76 | 23250 | 0.3722 | 0.2124 | 0.0859 |
| 0.2001 | 0.77 | 23500 | 0.3602 | 0.2072 | 0.0832 |
| 0.2095 | 0.77 | 23750 | 0.3607 | 0.2106 | 0.0851 |
| 0.2286 | 0.78 | 24000 | 0.3810 | 0.2132 | 0.0876 |
| 0.2284 | 0.79 | 24250 | 0.3677 | 0.2066 | 0.0847 |
| 0.2003 | 0.8 | 24500 | 0.3650 | 0.2098 | 0.0847 |
| 0.1992 | 0.81 | 24750 | 0.3491 | 0.2019 | 0.0813 |
| 0.224 | 0.81 | 25000 | 0.3602 | 0.2043 | 0.0825 |
| 0.2181 | 0.82 | 25250 | 0.3712 | 0.2120 | 0.0867 |
| 0.2226 | 0.83 | 25500 | 0.3657 | 0.2028 | 0.0830 |
| 0.1912 | 0.84 | 25750 | 0.3662 | 0.2076 | 0.0846 |
| 0.2283 | 0.85 | 26000 | 0.3505 | 0.2049 | 0.0825 |
| 0.2068 | 0.86 | 26250 | 0.3622 | 0.2111 | 0.0852 |
| 0.2444 | 0.86 | 26500 | 0.3660 | 0.2055 | 0.0840 |
| 0.2055 | 0.87 | 26750 | 0.3625 | 0.2055 | 0.0830 |
| 0.2074 | 0.88 | 27000 | 0.3566 | 0.1981 | 0.0812 |
| 0.2019 | 0.89 | 27250 | 0.3537 | 0.2038 | 0.0822 |
| 0.2174 | 0.9 | 27500 | 0.3664 | 0.1990 | 0.0809 |
| 0.2009 | 0.9 | 27750 | 0.3512 | 0.2035 | 0.0821 |
| 0.211 | 0.91 | 28000 | 0.3707 | 0.2068 | 0.0846 |
| 0.2541 | 0.92 | 28250 | 0.3435 | 0.1992 | 0.0812 |
| 0.2108 | 0.93 | 28500 | 0.3461 | 0.2046 | 0.0828 |
| 0.2274 | 0.94 | 28750 | 0.3364 | 0.1998 | 0.0812 |
| 0.2175 | 0.95 | 29000 | 0.3742 | 0.2113 | 0.0864 |
| 0.2368 | 0.95 | 29250 | 0.3431 | 0.2051 | 0.0833 |
| 0.1831 | 0.96 | 29500 | 0.3468 | 0.2034 | 0.0825 |
| 0.2202 | 0.97 | 29750 | 0.3342 | 0.1964 | 0.0791 |
| 0.183 | 0.98 | 30000 | 0.3413 | 0.1966 | 0.0792 |
| 0.1958 | 0.99 | 30250 | 0.3466 | 0.1991 | 0.0809 |
| 0.2167 | 0.99 | 30500 | 0.3530 | 0.2024 | 0.0816 |
| 0.2057 | 1.0 | 30750 | 0.3334 | 0.1960 | 0.0788 |
| 0.1982 | 1.01 | 31000 | 0.3312 | 0.1951 | 0.0789 |
| 0.2123 | 1.02 | 31250 | 0.3285 | 0.1955 | 0.0785 |
| 0.2269 | 1.03 | 31500 | 0.3548 | 0.2034 | 0.0812 |
| 0.2056 | 1.03 | 31750 | 0.3433 | 0.1969 | 0.0793 |
| 0.2234 | 1.04 | 32000 | 0.3446 | 0.1981 | 0.0805 |
| 0.1913 | 1.05 | 32250 | 0.3465 | 0.1969 | 0.0792 |
| 0.2005 | 1.06 | 32500 | 0.3348 | 0.1947 | 0.0784 |
| 0.2017 | 1.07 | 32750 | 0.3567 | 0.1972 | 0.0796 |
| 0.2523 | 1.08 | 33000 | 0.3367 | 0.1971 | 0.0801 |
| 0.1716 | 1.08 | 33250 | 0.3476 | 0.1975 | 0.0799 |
| 0.168 | 1.09 | 33500 | 0.3346 | 0.1951 | 0.0790 |
| 0.1995 | 1.1 | 33750 | 0.3564 | 0.1971 | 0.0794 |
| 0.198 | 1.11 | 34000 | 0.3409 | 0.1988 | 0.0796 |
| 0.1801 | 1.12 | 34250 | 0.3303 | 0.1995 | 0.0798 |
| 0.181 | 1.12 | 34500 | 0.3363 | 0.1967 | 0.0794 |
| 0.1966 | 1.13 | 34750 | 0.3375 | 0.1947 | 0.0784 |
| 0.2163 | 1.14 | 35000 | 0.3441 | 0.2011 | 0.0810 |
| 0.2285 | 1.15 | 35250 | 0.3303 | 0.1972 | 0.0801 |
| 0.1814 | 1.16 | 35500 | 0.3462 | 0.1895 | 0.0772 |
| 0.2127 | 1.17 | 35750 | 0.3393 | 0.1904 | 0.0775 |
| 0.1795 | 1.17 | 36000 | 0.3374 | 0.1928 | 0.0780 |
| 0.2062 | 1.18 | 36250 | 0.3286 | 0.1929 | 0.0783 |
| 0.172 | 1.19 | 36500 | 0.3334 | 0.1929 | 0.0781 |
| 0.1534 | 1.2 | 36750 | 0.3287 | 0.1895 | 0.0763 |
| 0.2101 | 1.21 | 37000 | 0.3261 | 0.1888 | 0.0764 |
| 0.2342 | 1.21 | 37250 | 0.3413 | 0.2007 | 0.0812 |
| 0.1692 | 1.22 | 37500 | 0.3375 | 0.1932 | 0.0780 |
| 0.165 | 1.23 | 37750 | 0.3220 | 0.1903 | 0.0767 |
| 0.2067 | 1.24 | 38000 | 0.3212 | 0.1855 | 0.0754 |
| 0.1984 | 1.25 | 38250 | 0.3339 | 0.1890 | 0.0762 |
| 0.2117 | 1.25 | 38500 | 0.3224 | 0.1900 | 0.0761 |
| 0.2036 | 1.26 | 38750 | 0.3410 | 0.1923 | 0.0790 |
| 0.2072 | 1.27 | 39000 | 0.3291 | 0.1904 | 0.0770 |
| 0.1962 | 1.28 | 39250 | 0.3237 | 0.1908 | 0.0770 |
| 0.2055 | 1.29 | 39500 | 0.3260 | 0.1896 | 0.0767 |
| 0.1753 | 1.3 | 39750 | 0.3375 | 0.1915 | 0.0777 |
| 0.1983 | 1.3 | 40000 | 0.3236 | 0.1850 | 0.0750 |
| 0.173 | 1.31 | 40250 | 0.3253 | 0.1870 | 0.0754 |
| 0.1773 | 1.32 | 40500 | 0.3316 | 0.1923 | 0.0766 |
| 0.1649 | 1.33 | 40750 | 0.3218 | 0.1842 | 0.0749 |
| 0.1806 | 1.34 | 41000 | 0.3161 | 0.1907 | 0.0769 |
| 0.1639 | 1.34 | 41250 | 0.3293 | 0.1898 | 0.0773 |
| 0.1733 | 1.35 | 41500 | 0.3148 | 0.1881 | 0.0756 |
| 0.1603 | 1.36 | 41750 | 0.3137 | 0.1851 | 0.0748 |
| 0.1753 | 1.37 | 42000 | 0.3100 | 0.1857 | 0.0751 |
| 0.1823 | 1.38 | 42250 | 0.3170 | 0.1821 | 0.0740 |
| 0.1647 | 1.39 | 42500 | 0.3210 | 0.1863 | 0.0758 |
| 0.1699 | 1.39 | 42750 | 0.3151 | 0.1861 | 0.0761 |
| 0.1693 | 1.4 | 43000 | 0.3184 | 0.1800 | 0.0730 |
| 0.1885 | 1.41 | 43250 | 0.3089 | 0.1830 | 0.0739 |
| 0.1626 | 1.42 | 43500 | 0.3203 | 0.1852 | 0.0746 |
| 0.1707 | 1.43 | 43750 | 0.3263 | 0.1834 | 0.0745 |
| 0.1997 | 1.43 | 44000 | 0.3092 | 0.1807 | 0.0732 |
| 0.1568 | 1.44 | 44250 | 0.3096 | 0.1828 | 0.0742 |
| 0.1698 | 1.45 | 44500 | 0.3142 | 0.1826 | 0.0736 |
| 0.165 | 1.46 | 44750 | 0.3205 | 0.1903 | 0.0765 |
| 0.1834 | 1.47 | 45000 | 0.3226 | 0.1855 | 0.0750 |
| 0.1614 | 1.47 | 45250 | 0.3141 | 0.1825 | 0.0737 |
| 0.1794 | 1.48 | 45500 | 0.3100 | 0.1823 | 0.0734 |
| 0.1838 | 1.49 | 45750 | 0.2988 | 0.1825 | 0.0732 |
| 0.1678 | 1.5 | 46000 | 0.3061 | 0.1797 | 0.0726 |
| 0.1691 | 1.51 | 46250 | 0.3192 | 0.1821 | 0.0743 |
| 0.1743 | 1.52 | 46500 | 0.3133 | 0.1815 | 0.0736 |
| 0.1666 | 1.52 | 46750 | 0.3034 | 0.1821 | 0.0731 |
| 0.1903 | 1.53 | 47000 | 0.3016 | 0.1792 | 0.0721 |
| 0.2236 | 1.54 | 47250 | 0.3137 | 0.1819 | 0.0739 |
| 0.1469 | 1.55 | 47500 | 0.3092 | 0.1796 | 0.0725 |
| 0.175 | 1.56 | 47750 | 0.3009 | 0.1753 | 0.0715 |
| 0.1578 | 1.56 | 48000 | 0.3023 | 0.1778 | 0.0717 |
| 0.1998 | 1.57 | 48250 | 0.3059 | 0.1816 | 0.0726 |
| 0.1887 | 1.58 | 48500 | 0.3063 | 0.1791 | 0.0720 |
| 0.1635 | 1.59 | 48750 | 0.3027 | 0.1786 | 0.0722 |
| 0.2074 | 1.6 | 49000 | 0.3001 | 0.1756 | 0.0708 |
| 0.1719 | 1.61 | 49250 | 0.2957 | 0.1788 | 0.0718 |
| 0.1654 | 1.61 | 49500 | 0.3020 | 0.1795 | 0.0721 |
| 0.1572 | 1.62 | 49750 | 0.3011 | 0.1769 | 0.0710 |
| 0.177 | 1.63 | 50000 | 0.2929 | 0.1783 | 0.0712 |
| 0.1771 | 1.64 | 50250 | 0.3082 | 0.1812 | 0.0722 |
| 0.1796 | 1.65 | 50500 | 0.2946 | 0.1781 | 0.0713 |
| 0.1585 | 1.65 | 50750 | 0.3107 | 0.1819 | 0.0733 |
| 0.1815 | 1.66 | 51000 | 0.3073 | 0.1775 | 0.0721 |
| 0.1601 | 1.67 | 51250 | 0.3020 | 0.1734 | 0.0701 |
| 0.1769 | 1.68 | 51500 | 0.3063 | 0.1771 | 0.0710 |
| 0.1786 | 1.69 | 51750 | 0.3014 | 0.1766 | 0.0703 |
| 0.1743 | 1.69 | 52000 | 0.3096 | 0.1767 | 0.0710 |
| 0.1864 | 1.7 | 52250 | 0.3063 | 0.1802 | 0.0720 |
| 0.1399 | 1.71 | 52500 | 0.3094 | 0.1773 | 0.0718 |
| 0.1678 | 1.72 | 52750 | 0.3073 | 0.1796 | 0.0724 |
| 0.1497 | 1.73 | 53000 | 0.2959 | 0.1753 | 0.0701 |
| 0.1565 | 1.74 | 53250 | 0.2955 | 0.1729 | 0.0690 |
| 0.1942 | 1.74 | 53500 | 0.2926 | 0.1742 | 0.0693 |
| 0.1848 | 1.75 | 53750 | 0.2897 | 0.1721 | 0.0686 |
| 0.1664 | 1.76 | 54000 | 0.2951 | 0.1787 | 0.0708 |
| 0.1296 | 1.77 | 54250 | 0.2899 | 0.1705 | 0.0679 |
| 0.1763 | 1.78 | 54500 | 0.2877 | 0.1725 | 0.0686 |
| 0.1627 | 1.78 | 54750 | 0.2959 | 0.1735 | 0.0697 |
| 0.1433 | 1.79 | 55000 | 0.2952 | 0.1729 | 0.0693 |
| 0.1632 | 1.8 | 55250 | 0.2829 | 0.1728 | 0.0686 |
| 0.1837 | 1.81 | 55500 | 0.2840 | 0.1734 | 0.0688 |
| 0.146 | 1.82 | 55750 | 0.2855 | 0.1686 | 0.0675 |
| 0.1862 | 1.83 | 56000 | 0.2845 | 0.1721 | 0.0691 |
| 0.1576 | 1.83 | 56250 | 0.2917 | 0.1762 | 0.0703 |
| 0.1183 | 1.84 | 56500 | 0.2828 | 0.1713 | 0.0687 |
| 0.1582 | 1.85 | 56750 | 0.2910 | 0.1708 | 0.0684 |
| 0.1699 | 1.86 | 57000 | 0.2927 | 0.1733 | 0.0693 |
| 0.1761 | 1.87 | 57250 | 0.2812 | 0.1687 | 0.0671 |
| 0.1421 | 1.87 | 57500 | 0.2869 | 0.1699 | 0.0683 |
| 0.1662 | 1.88 | 57750 | 0.2913 | 0.1748 | 0.0697 |
| 0.151 | 1.89 | 58000 | 0.2984 | 0.1736 | 0.0692 |
| 0.1513 | 1.9 | 58250 | 0.2862 | 0.1715 | 0.0683 |
| 0.165 | 1.91 | 58500 | 0.2808 | 0.1691 | 0.0675 |
| 0.1391 | 1.91 | 58750 | 0.2848 | 0.1696 | 0.0676 |
| 0.1297 | 1.92 | 59000 | 0.2837 | 0.1666 | 0.0666 |
| 0.1398 | 1.93 | 59250 | 0.2771 | 0.1682 | 0.0673 |
| 0.1356 | 1.94 | 59500 | 0.2868 | 0.1710 | 0.0686 |
| 0.1552 | 1.95 | 59750 | 0.2817 | 0.1707 | 0.0687 |
| 0.1697 | 1.96 | 60000 | 0.2901 | 0.1691 | 0.0673 |
| 0.1545 | 1.96 | 60250 | 0.2796 | 0.1671 | 0.0668 |
| 0.1598 | 1.97 | 60500 | 0.2791 | 0.1655 | 0.0664 |
| 0.147 | 1.98 | 60750 | 0.2775 | 0.1673 | 0.0665 |
| 0.1658 | 1.99 | 61000 | 0.2742 | 0.1681 | 0.0665 |
| 0.1501 | 2.0 | 61250 | 0.2811 | 0.1702 | 0.0676 |
| 0.1428 | 2.0 | 61500 | 0.2827 | 0.1669 | 0.0665 |
| 0.1107 | 2.01 | 61750 | 0.2854 | 0.1649 | 0.0664 |
| 0.1176 | 2.02 | 62000 | 0.2881 | 0.1650 | 0.0661 |
| 0.1032 | 2.03 | 62250 | 0.2706 | 0.1647 | 0.0657 |
| 0.1229 | 2.04 | 62500 | 0.2806 | 0.1678 | 0.0666 |
| 0.1397 | 2.05 | 62750 | 0.2799 | 0.1634 | 0.0654 |
| 0.1182 | 2.05 | 63000 | 0.2799 | 0.1653 | 0.0662 |
| 0.1294 | 2.06 | 63250 | 0.2746 | 0.1635 | 0.0656 |
| 0.1358 | 2.07 | 63500 | 0.2738 | 0.1643 | 0.0653 |
| 0.1464 | 2.08 | 63750 | 0.2793 | 0.1662 | 0.0664 |
| 0.1362 | 2.09 | 64000 | 0.2738 | 0.1635 | 0.0650 |
| 0.1264 | 2.09 | 64250 | 0.2795 | 0.1653 | 0.0666 |
| 0.1189 | 2.1 | 64500 | 0.2792 | 0.1657 | 0.0658 |
| 0.1157 | 2.11 | 64750 | 0.2754 | 0.1619 | 0.0648 |
| 0.1232 | 2.12 | 65000 | 0.2798 | 0.1627 | 0.0646 |
| 0.1209 | 2.13 | 65250 | 0.2723 | 0.1623 | 0.0654 |
| 0.1079 | 2.13 | 65500 | 0.2769 | 0.1611 | 0.0644 |
| 0.1217 | 2.14 | 65750 | 0.2694 | 0.1604 | 0.0642 |
| 0.1207 | 2.15 | 66000 | 0.2758 | 0.1608 | 0.0645 |
| 0.1515 | 2.16 | 66250 | 0.2691 | 0.1634 | 0.0654 |
| 0.1145 | 2.17 | 66500 | 0.2770 | 0.1623 | 0.0651 |
| 0.1752 | 2.18 | 66750 | 0.2696 | 0.1603 | 0.0642 |
| 0.1366 | 2.18 | 67000 | 0.2725 | 0.1610 | 0.0647 |
| 0.1171 | 2.19 | 67250 | 0.2734 | 0.1615 | 0.0647 |
| 0.1429 | 2.2 | 67500 | 0.2748 | 0.1602 | 0.0644 |
| 0.1329 | 2.21 | 67750 | 0.2727 | 0.1593 | 0.0643 |
| 0.1126 | 2.22 | 68000 | 0.2734 | 0.1603 | 0.0645 |
| 0.117 | 2.22 | 68250 | 0.2803 | 0.1632 | 0.0655 |
| 0.1109 | 2.23 | 68500 | 0.2748 | 0.1589 | 0.0644 |
| 0.1296 | 2.24 | 68750 | 0.2749 | 0.1594 | 0.0640 |
| 0.1506 | 2.25 | 69000 | 0.2680 | 0.1582 | 0.0633 |
| 0.1282 | 2.26 | 69250 | 0.2664 | 0.1587 | 0.0638 |
| 0.0987 | 2.27 | 69500 | 0.2700 | 0.1593 | 0.0638 |
| 0.1288 | 2.27 | 69750 | 0.2726 | 0.1582 | 0.0636 |
| 0.1022 | 2.28 | 70000 | 0.2744 | 0.1577 | 0.0636 |
| 0.1102 | 2.29 | 70250 | 0.2672 | 0.1586 | 0.0636 |
| 0.1165 | 2.3 | 70500 | 0.2677 | 0.1569 | 0.0632 |
| 0.1289 | 2.31 | 70750 | 0.2700 | 0.1558 | 0.0628 |
| 0.1144 | 2.31 | 71000 | 0.2780 | 0.1585 | 0.0637 |
| 0.1092 | 2.32 | 71250 | 0.2687 | 0.1561 | 0.0629 |
| 0.1126 | 2.33 | 71500 | 0.2733 | 0.1553 | 0.0623 |
| 0.1231 | 2.34 | 71750 | 0.2712 | 0.1568 | 0.0630 |
| 0.1302 | 2.35 | 72000 | 0.2638 | 0.1548 | 0.0622 |
| 0.1104 | 2.35 | 72250 | 0.2671 | 0.1561 | 0.0632 |
| 0.1076 | 2.36 | 72500 | 0.2636 | 0.1552 | 0.0623 |
| 0.1193 | 2.37 | 72750 | 0.2650 | 0.1580 | 0.0633 |
| 0.1163 | 2.38 | 73000 | 0.2624 | 0.1555 | 0.0621 |
| 0.1108 | 2.39 | 73250 | 0.2667 | 0.1552 | 0.0622 |
| 0.0903 | 2.4 | 73500 | 0.2643 | 0.1555 | 0.0626 |
| 0.1388 | 2.4 | 73750 | 0.2674 | 0.1546 | 0.0623 |
| 0.1069 | 2.41 | 74000 | 0.2621 | 0.1553 | 0.0623 |
| 0.1054 | 2.42 | 74250 | 0.2567 | 0.1549 | 0.0617 |
| 0.1151 | 2.43 | 74500 | 0.2625 | 0.1538 | 0.0617 |
| 0.1304 | 2.44 | 74750 | 0.2588 | 0.1537 | 0.0616 |
| 0.1102 | 2.44 | 75000 | 0.2598 | 0.1533 | 0.0617 |
| 0.0926 | 2.45 | 75250 | 0.2592 | 0.1529 | 0.0612 |
| 0.118 | 2.46 | 75500 | 0.2565 | 0.1542 | 0.0614 |
| 0.1027 | 2.47 | 75750 | 0.2603 | 0.1520 | 0.0611 |
| 0.1084 | 2.48 | 76000 | 0.2603 | 0.1529 | 0.0611 |
| 0.1077 | 2.49 | 76250 | 0.2572 | 0.1519 | 0.0610 |
| 0.1033 | 2.49 | 76500 | 0.2595 | 0.1518 | 0.0608 |
| 0.1071 | 2.5 | 76750 | 0.2562 | 0.1515 | 0.0607 |
| 0.1012 | 2.51 | 77000 | 0.2537 | 0.1521 | 0.0609 |
| 0.1257 | 2.52 | 77250 | 0.2572 | 0.1521 | 0.0612 |
| 0.1096 | 2.53 | 77500 | 0.2549 | 0.1518 | 0.0606 |
| 0.1169 | 2.53 | 77750 | 0.2589 | 0.1538 | 0.0614 |
| 0.1136 | 2.54 | 78000 | 0.2558 | 0.1520 | 0.0607 |
| 0.1137 | 2.55 | 78250 | 0.2600 | 0.1539 | 0.0611 |
| 0.0968 | 2.56 | 78500 | 0.2600 | 0.1523 | 0.0610 |
| 0.0982 | 2.57 | 78750 | 0.2598 | 0.1515 | 0.0607 |
| 0.085 | 2.57 | 79000 | 0.2534 | 0.1499 | 0.0601 |
| 0.1193 | 2.58 | 79250 | 0.2550 | 0.1506 | 0.0601 |
| 0.1096 | 2.59 | 79500 | 0.2519 | 0.1499 | 0.0601 |
| 0.1132 | 2.6 | 79750 | 0.2518 | 0.1516 | 0.0603 |
| 0.1119 | 2.61 | 80000 | 0.2494 | 0.1511 | 0.0600 |
| 0.1005 | 2.62 | 80250 | 0.2483 | 0.1492 | 0.0596 |
| 0.1098 | 2.62 | 80500 | 0.2544 | 0.1507 | 0.0601 |
| 0.1052 | 2.63 | 80750 | 0.2537 | 0.1486 | 0.0595 |
| 0.13 | 2.64 | 81000 | 0.2505 | 0.1499 | 0.0596 |
| 0.1104 | 2.65 | 81250 | 0.2487 | 0.1492 | 0.0594 |
| 0.1168 | 2.66 | 81500 | 0.2461 | 0.1500 | 0.0596 |
| 0.1198 | 2.66 | 81750 | 0.2496 | 0.1504 | 0.0597 |
| 0.0936 | 2.67 | 82000 | 0.2556 | 0.1496 | 0.0598 |
| 0.1235 | 2.68 | 82250 | 0.2537 | 0.1501 | 0.0599 |
| 0.0952 | 2.69 | 82500 | 0.2525 | 0.1504 | 0.0599 |
| 0.1014 | 2.7 | 82750 | 0.2470 | 0.1505 | 0.0599 |
| 0.1178 | 2.71 | 83000 | 0.2506 | 0.1487 | 0.0596 |
| 0.0969 | 2.71 | 83250 | 0.2487 | 0.1482 | 0.0592 |
| 0.1192 | 2.72 | 83500 | 0.2469 | 0.1483 | 0.0592 |
| 0.0978 | 2.73 | 83750 | 0.2510 | 0.1490 | 0.0595 |
| 0.1053 | 2.74 | 84000 | 0.2496 | 0.1487 | 0.0595 |
| 0.106 | 2.75 | 84250 | 0.2481 | 0.1494 | 0.0594 |
| 0.1217 | 2.75 | 84500 | 0.2482 | 0.1493 | 0.0592 |
| 0.103 | 2.76 | 84750 | 0.2475 | 0.1485 | 0.0591 |
| 0.1424 | 2.77 | 85000 | 0.2466 | 0.1476 | 0.0590 |
| 0.116 | 2.78 | 85250 | 0.2488 | 0.1484 | 0.0593 |
| 0.098 | 2.79 | 85500 | 0.2509 | 0.1488 | 0.0596 |
| 0.1101 | 2.79 | 85750 | 0.2488 | 0.1482 | 0.0594 |
| 0.1068 | 2.8 | 86000 | 0.2480 | 0.1479 | 0.0591 |
| 0.0929 | 2.81 | 86250 | 0.2477 | 0.1476 | 0.0590 |
| 0.1142 | 2.82 | 86500 | 0.2471 | 0.1472 | 0.0589 |
| 0.1144 | 2.83 | 86750 | 0.2451 | 0.1467 | 0.0585 |
| 0.1115 | 2.84 | 87000 | 0.2464 | 0.1474 | 0.0588 |
| 0.1415 | 2.84 | 87250 | 0.2428 | 0.1467 | 0.0585 |
| 0.084 | 2.85 | 87500 | 0.2459 | 0.1465 | 0.0583 |
| 0.1198 | 2.86 | 87750 | 0.2452 | 0.1477 | 0.0587 |
| 0.1228 | 2.87 | 88000 | 0.2420 | 0.1472 | 0.0584 |
| 0.1063 | 2.88 | 88250 | 0.2462 | 0.1469 | 0.0586 |
| 0.1313 | 2.88 | 88500 | 0.2415 | 0.1464 | 0.0582 |
| 0.1081 | 2.89 | 88750 | 0.2421 | 0.1458 | 0.0580 |
| 0.0984 | 2.9 | 89000 | 0.2452 | 0.1464 | 0.0584 |
| 0.0934 | 2.91 | 89250 | 0.2428 | 0.1466 | 0.0584 |
| 0.0964 | 2.92 | 89500 | 0.2437 | 0.1456 | 0.0581 |
| 0.1113 | 2.93 | 89750 | 0.2442 | 0.1461 | 0.0583 |
| 0.1082 | 2.93 | 90000 | 0.2432 | 0.1455 | 0.0581 |
| 0.1147 | 2.94 | 90250 | 0.2417 | 0.1452 | 0.0580 |
| 0.0896 | 2.95 | 90500 | 0.2442 | 0.1459 | 0.0582 |
| 0.1169 | 2.96 | 90750 | 0.2433 | 0.1456 | 0.0582 |
| 0.0968 | 2.97 | 91000 | 0.2439 | 0.1456 | 0.0581 |
| 0.0888 | 2.97 | 91250 | 0.2435 | 0.1454 | 0.0580 |
| 0.1278 | 2.98 | 91500 | 0.2427 | 0.1457 | 0.0581 |
| 0.1234 | 2.99 | 91750 | 0.2425 | 0.1455 | 0.0580 |
| 0.0922 | 3.0 | 92000 | 0.2427 | 0.1455 | 0.0580 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gqd/mistral-merge-7b | gqd | 2024-01-06T10:59:58Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:Open-Orca/Mistral-7B-SlimOrca",
"license:unlicense",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-06T01:59:38Z | ---
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- Open-Orca/Mistral-7B-SlimOrca
tags:
- mergekit
- merge
license: unlicense
---
# out2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
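Conceptually, a linear merge is just a weighted average of matching parameter tensors across checkpoints. A simplified sketch of the idea (not mergekit's actual implementation):
```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors across checkpoints (a 'model soup')."""
    total = sum(weights)
    return {
        name: sum(w * sd[name].float() for sd, w in zip(state_dicts, weights)) / total
        for name in state_dicts[0]
    }
```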
### Models Merged
The following models were included in the merge:
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [Open-Orca/Mistral-7B-SlimOrca](https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
dtype: float16
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
weight: 1.0
- model: Open-Orca/Mistral-7B-SlimOrca
parameters:
weight: 1.0
#tokenizer_source: union
``` |
dhanushreddy29/BrokenKeyboardMerge | dhanushreddy29 | 2024-03-05T13:14:42Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"dhanushreddy29/BrokenKeyboard",
"udkai/Turdus",
"conversational",
"base_model:dhanushreddy29/BrokenKeyboard",
"base_model:udkai/Turdus",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-14T11:09:27Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- dhanushreddy29/BrokenKeyboard
- udkai/Turdus
base_model:
- dhanushreddy29/BrokenKeyboard
- udkai/Turdus
model-index:
- name: BrokenKeyboardMerge
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboardMerge
name: Open LLM Leaderboard
---
# BrokenKeyboardMerge
BrokenKeyboardMerge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [dhanushreddy29/BrokenKeyboard](https://huggingface.co/dhanushreddy29/BrokenKeyboard)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: dhanushreddy29/BrokenKeyboard
layer_range: [0, 16]
- sources:
- model: udkai/Turdus
layer_range: [16, 32]
merge_method: passthrough
dtype: float16
```
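In rough terms, the passthrough method above stacks decoder layers 0–15 from BrokenKeyboard with layers 16–31 from Turdus. A simplified sketch of the idea (the key names assume the usual Llama-style `model.layers.N.` layout; mergekit's real implementation also handles config and layer renumbering):
```python
import re

def passthrough_stack(sd_a, sd_b, split=16):
    merged = {}
    for name, tensor in sd_a.items():          # layers [0, split) plus embeddings/norm/head
        m = re.match(r"model\.layers\.(\d+)\.", name)
        if m is None or int(m.group(1)) < split:
            merged[name] = tensor
    for name, tensor in sd_b.items():          # layers [split, 32)
        m = re.match(r"model\.layers\.(\d+)\.", name)
        if m is not None and int(m.group(1)) >= split:
            merged[name] = tensor
    return merged
```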
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "dhanushreddy29/BrokenKeyboardMerge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhanushreddy29__BrokenKeyboardMerge)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.33|
|AI2 Reasoning Challenge (25-Shot)|59.73|
|HellaSwag (10-Shot) |81.25|
|MMLU (5-Shot) |58.36|
|TruthfulQA (0-shot) |52.00|
|Winogrande (5-shot) |78.69|
|GSM8k (5-shot) |25.93|
|
KeyonZeng/lion-zephyr-7b | KeyonZeng | 2024-03-25T04:21:21Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-20T07:50:06Z | ---
library_name: transformers
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
metrics:
- accuracy
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BarryFutureman/WildMarcoroni-Variant1-7B | BarryFutureman | 2024-01-31T00:03:27Z | 738 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-23T21:55:37Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
# WildMarcoroni-Variant1-7B
This is the result of [EvoMerge](https://github.com/BarryFutureman/EvoMerge)
Ancestry:

|
OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k | OpenBuddy | 2024-01-24T05:35:27Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T03:48:43Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer
All OpenBuddy models have inherent limitations and may produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should exercise caution and must not use these models in critical or high-risk scenarios, to avoid personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without any warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
bartowski/internlm2-math-20b-llama | bartowski | 2024-01-24T23:29:00Z | 738 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-24T18:57:54Z | ---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM)
</div>
## Converted using <a href="https://huggingface.co/chargoddard">Charles Goddard's</a> conversion script to create llama models from internlm
Original REPO link: https://huggingface.co/internlm/internlm2-math-20b
ExLLamaV2 link: https://huggingface.co/bartowski/internlm2-math-20b-llama-exl2
|
Gille/StrangeMerges_10-7B-slerp | Gille | 2024-03-04T21:52:14Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B-v3",
"Gille/StrangeMerges_9-7B-dare_ties",
"base_model:flemmingmiguel/MBX-7B-v3",
"base_model:Gille/StrangeMerges_9-7B-dare_ties",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-29T21:09:00Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B-v3
- Gille/StrangeMerges_9-7B-dare_ties
base_model:
- flemmingmiguel/MBX-7B-v3
- Gille/StrangeMerges_9-7B-dare_ties
model-index:
- name: StrangeMerges_10-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_10-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_10-7B-slerp
StrangeMerges_10-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [Gille/StrangeMerges_9-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_9-7B-dare_ties)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: flemmingmiguel/MBX-7B-v3
layer_range: [0, 32]
- model: Gille/StrangeMerges_9-7B-dare_ties
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_9-7B-dare_ties
parameters:
t:
- filter: self_attn
value: [0.1, 0.5, 0.3, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.5, 0.7, 0.3, 0.1]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_10-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_10-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.77|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |88.30|
|MMLU (5-Shot) |64.87|
|TruthfulQA (0-shot) |69.49|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |70.13|
|
ChuckMcSneed/Gembo-v1-70b | ChuckMcSneed | 2024-04-01T18:55:52Z | 738 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"nsfw",
"not-for-all-audiences",
"en",
"ru",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T14:33:47Z | ---
language:
- en
- ru
license: llama2
tags:
- merge
- mergekit
- nsfw
- not-for-all-audiences
model-index:
- name: Gembo-v1-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
name: Open LLM Leaderboard
---

This is my first "serious" (with practical use cases) experimental merge. Judge harshly. Mainly made for RP, but should be okay as an assistant. Turned out quite good, considering the amount of LORAs I merged into it.
# Observations
- GPTisms and repetition: raise temperature and rep. pen., and add common GPTisms as stop sequences
- A bit different from the usual stuff; I'd say it has so much slop in it that it unslops itself
- Lightly censored
- Fairly neutral; it can be violent if you ask it really well, though Goliath is a bit better at it
- Has a bit of optimism baked in, but it's not very severe
- Doesn't know when to stop; can be quite verbose or just stop almost immediately (maybe it wants LimaRP settings, idk)
- Sometimes can't handle '
- Second model that tried to be funny unprompted to me (the first one was Goliath)
- Moderately intelligent
- Quite creative
# Naming
Internal name of this model was euryale-guano-saiga-med-janboros-kim-lima-wiz-tony-d30-s40, but I decided to keep it short, and since it was iteration G in my files, I called it "Gembo".
# Quants
Thanks for GGUF quants, [@Artefact2](https://huggingface.co/Artefact2)!
- [GGUF](https://huggingface.co/Artefact2/Gembo-v1-70b-GGUF)
# Prompt format
Alpaca. You can also try some other formats; I'm pretty sure it has a lot of them from all those merges.
```
### Instruction:
{instruction}
### Response:
```
# Settings
As I already mentioned, high temperature and rep. pen. work great.
For RP try something like this:
- temperature=5
- MinP=0.10
- rep.pen.=1.15
Adjust to match your needs.
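Below is a minimal sketch of applying these settings with the `transformers` library (an assumption on my part — the model is usually run through GGUF/llama.cpp backends instead; `min_p` also requires a recent transformers version, and the prompt is just an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ChuckMcSneed/Gembo-v1-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt, as described above
prompt = "### Instruction:\nWrite a short tavern scene between two rivals.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Suggested RP settings: high temperature, MinP and repetition penalty
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=5.0,
    min_p=0.10,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```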
# How it was created
I took Sao10K/Euryale-1.3-L2-70B (Good base model) and added
- Mikael110/llama-2-70b-guanaco-qlora (Creativity+assistant)
- IlyaGusev/saiga2_70b_lora (Creativity+assistant)
- s1ghhh/medllama-2-70b-qlora-1.1 (More data)
- v2ray/Airoboros-2.1-Jannie-70B-QLoRA (Creativity+assistant)
- Chat-Error/fiction.live-Kimiko-V2-70B (Creativity)
- Doctor-Shotgun/limarpv3-llama2-70b-qlora (Creativity)
- v2ray/LLaMA-2-Wizard-70B-QLoRA (Creativity+assistant)
- v2ray/TonyGPT-70B-QLoRA (Special spice)
Then I SLERP-merged it with cognitivecomputations/dolphin-2.2-70b (needed to bridge the gap between this wonderful mess and Smaxxxer, otherwise its quality is low) with 0.3t and then SLERP-merged it again with ChuckMcSneed/SMaxxxer-v1-70b (Creativity) with 0.4t. For SLERP merges I used https://github.com/arcee-ai/mergekit.
# Benchmarks (Do they even mean anything anymore?)
### NeoEvalPlusN_benchmark
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark)
| Test name | Gembo |
| ---------- | ---------- |
| B | 2.5 |
| C | 1.5 |
| D | 3 |
| S | 7.5 |
| P | 5.25 |
| Total | 19.75 |
Absurdly high. That's what happens when you optimize the merges for a benchmark.
### WolframRavenwolf
Benchmark by [@wolfram](https://huggingface.co/wolfram)
Artefact2/Gembo-v1-70b-GGUF GGUF Q5_K_M, 4K context, Alpaca format:
- ✅ Gave correct answers to all 18/18 multiple choice questions! Just the questions, no previous information, gave correct answers: 16/18
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
This shows that this model can be used for real world use cases as an assistant.
### UGI
[UGI: Uncensored General Intelligence](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
|Model|UGI 🏆|W/10 👍|Unruly|Internet|CrimeStats|Stories/Jokes|PolContro|
|-----|------|-------|------|--------|----------|-------------|---------|
|ChuckMcSneed/Gembo-v1-70b|36.38|4|50.8|46.5|32.5|48.7|3.4|
|AVERAGE on 30-03-2024|35.10922581|5.823870968|37.33419355|28.06258065|35.93032258|44.25419355|29.96451613|
Quite average. Not too uncensored, not too censored.
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|--------------------------------|-------|-----|---------|-----|----------|----------|-----|
|ChuckMcSneed/Gembo-v1-70b |70.51 |71.25|86.98 |70.85|63.25 |80.51 |50.19|
|ChuckMcSneed/SMaxxxer-v1-70b |72.23 |70.65|88.02 |70.55|60.7 |82.87 |60.58|
Looks like adding a shitton of RP stuff decreased HellaSwag, WinoGrande and GSM8K, but increased TruthfulQA, MMLU and ARC. Interesting. To be honest, I'm a bit surprised that it didn't do that much worse.
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__Gembo-v1-70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.51|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |86.98|
|MMLU (5-Shot) |70.85|
|TruthfulQA (0-shot) |63.25|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |50.19|
|
Eric111/Mayoroya | Eric111 | 2024-02-08T19:12:16Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Eric111/Mayo",
"Eric111/Roya",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-08T19:07:26Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Eric111/Mayo
- Eric111/Roya
---
# Mayoroya
Mayoroya is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Eric111/Mayo](https://huggingface.co/Eric111/Mayo)
* [Eric111/Roya](https://huggingface.co/Eric111/Roya)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Eric111/Mayo
layer_range: [0, 32]
- model: Eric111/Roya
layer_range: [0, 32]
merge_method: slerp
base_model: Eric111/Mayo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Eric111/caTUNABeagle | Eric111 | 2024-02-11T09:01:48Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"rishiraj/CatPPT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T18:26:45Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- rishiraj/CatPPT-base
---
# caTUNABeagle
caTUNABeagle is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
- model: rishiraj/CatPPT-base
layer_range: [0, 32]
merge_method: slerp
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Inv/MoECPM-Untrained-4x2b | Inv | 2024-03-04T18:21:23Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Mixtral",
"openbmb/MiniCPM-2B-sft-bf16-llama-format",
"MoE",
"merge",
"mergekit",
"moerge",
"MiniCPM",
"conversational",
"en",
"zh",
"base_model:openbmb/MiniCPM-2B-sft-bf16-llama-format",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T12:10:34Z | ---
language:
- en
- zh
license: apache-2.0
tags:
- Mixtral
- openbmb/MiniCPM-2B-sft-bf16-llama-format
- MoE
- merge
- mergekit
- moerge
- MiniCPM
base_model:
- openbmb/MiniCPM-2B-sft-bf16-llama-format
model-index:
- name: MoECPM-Untrained-4x2b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 46.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 38.41
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Inv/MoECPM-Untrained-4x2b
name: Open LLM Leaderboard
---
# MoECPM Untrained 4x2b
## Model Details
### Model Description
A MoE model out of 4 MiniCPM-2B-sft models. Intended to be trained. This version probably does not perform well (if it works at all, lol. I haven't tested it).
## Uses
- Training
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Inv__MoECPM-Untrained-4x2b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.51|
|AI2 Reasoning Challenge (25-Shot)|46.76|
|HellaSwag (10-Shot) |72.58|
|MMLU (5-Shot) |53.21|
|TruthfulQA (0-shot) |38.41|
|Winogrande (5-shot) |65.51|
|GSM8k (5-shot) |44.58|
|
arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5 | arlineka | 2024-03-07T01:45:03Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T15:02:15Z | ---
license: cc-by-nc-4.0
tags:
- moe
model-index:
- name: Brunhilde-2x7b-MOE-DPO-v.01.5
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.47
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5
name: Open LLM Leaderboard
---
# Brunhilde-2x7b-MOE-DPO-v.01.5
Brunhilde-2x7b-MOE-DPO-v.01.5 is a Mixture of Experts (MoE) model.
* [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k)
* [mncai/mistral-7b-dpo-v6](https://huggingface.co/mncai/mistral-7b-dpo-v6)
## Usage
```
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arlineka/Brunhilde-2x7b-MOE-DPO-v.01.5"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arlineka__Brunhilde-2x7b-MOE-DPO-v.01.5)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.81|
|AI2 Reasoning Challenge (25-Shot)|69.54|
|HellaSwag (10-Shot) |87.02|
|MMLU (5-Shot) |64.93|
|TruthfulQA (0-shot) |65.47|
|Winogrande (5-shot) |80.90|
|GSM8k (5-shot) |63.00|
|
Xenon1/Zenith-7B | Xenon1 | 2024-02-13T20:36:54Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Zenith-7B",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T20:26:02Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Zenith-7B
pipeline_tag: text-generation
---
# Model Card for Zenith-7B
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Zenith-7B")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Zenith-7B")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
vistagi/Mixtral-8x7b-v0.1-dpo | vistagi | 2024-02-24T20:19:06Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T16:35:47Z | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
---
# Introduction
This model, vistagi/Mixtral-8x7b-v0.1-sft, was trained on the Ultrachat-200K dataset through supervised finetuning, using Mixtral-8x7b-v0.1 as the baseline model. The training was done in bfloat16 precision using LoRA.
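As a rough illustration of this kind of setup (not the exact training code), loading the base model in bfloat16 and attaching a LoRA adapter with `peft` could look like this; the LoRA hyperparameters below are placeholders:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mixtral-8x7B-v0.1"  # baseline model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)

# Train only a small LoRA adapter instead of the full 8x7B weights
lora_config = LoraConfig(
    r=16,                          # placeholder rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```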
## Details
Used Libraries
- torch
- deepspeed
- pytorch lightning
- transformers
- peft |
Yuma42/KangalKhan-DesolatingRuby-7B | Yuma42 | 2024-03-05T10:55:33Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-RawEmerald-7B",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-RawEmerald-7B",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T12:47:51Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-RawEmerald-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- Yuma42/KangalKhan-RawEmerald-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
model-index:
- name: KangalKhan-DesolatingRuby-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.05
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-DesolatingRuby-7B
name: Open LLM Leaderboard
---
# KangalKhan-DesolatingRuby-7B
KangalKhan-DesolatingRuby-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-RawEmerald-7B](https://huggingface.co/Yuma42/KangalKhan-RawEmerald-7B)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Yuma42/KangalKhan-RawEmerald-7B
layer_range: [0, 32]
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Yuma42/KangalKhan-RawEmerald-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-DesolatingRuby-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-DesolatingRuby-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.86|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.46|
|MMLU (5-Shot) |63.38|
|TruthfulQA (0-shot) |57.05|
|Winogrande (5-shot) |78.45|
|GSM8k (5-shot) |61.94|
|
jeiku/Lunar_10.7B | jeiku | 2024-02-19T05:21:44Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T04:46:25Z | ---
license: cc-by-nc-sa-4.0
language:
- en
---
This model consists of a finetuned model of my own, SLERP-merged with this model: https://huggingface.co/Sao10K/Sensualize-Solar-10.7B, created by https://huggingface.co/Sao10K

Lunar was produced by a variety of methods for the purpose of being a companion bot capable of intimacy as well as conversation.
GGUF here: https://huggingface.co/jeiku/Lunar_10.7B_GGUF |
LordNoah/spin_gpt2_medium_alpaca_e3 | LordNoah | 2024-02-19T07:49:01Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T07:41:22Z | ---
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cookinai/titanbagel | cookinai | 2024-02-29T04:54:42Z | 738 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T05:33:27Z | ---
license: cc-by-4.0
---
# experiment
Finetune of Jon Durbin's Bagel DPO 7B with the Hercules 3.0 Dataset |
yam-peleg/Experiment23-7B | yam-peleg | 2024-02-27T21:30:01Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T02:01:18Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment23-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details in the future experiments.
|
liminerity/Neurotic-Jomainotrik-7b-slerp | liminerity | 2024-03-11T18:32:40Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge",
"bardsai/jaskier-7b-dpo-v5.6",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T21:00:27Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge
- bardsai/jaskier-7b-dpo-v5.6
model-index:
- name: Neurotic-Jomainotrik-7b-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Neurotic-Jomainotrik-7b-slerp
name: Open LLM Leaderboard
---
# Neurotic-Jomainotrik-7b-slerp
Neurotic-Jomainotrik-7b-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge](https://huggingface.co/liminerity/merge)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/merge
layer_range: [0, 32]
- model: bardsai/jaskier-7b-dpo-v5.6
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Neurotic-Jomainotrik-7b-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.40|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |89.15|
|MMLU (5-Shot) |64.28|
|TruthfulQA (0-shot) |77.64|
|Winogrande (5-shot) |85.40|
|GSM8k (5-shot) |68.99|
|
YeungNLP/firefly-gemma-7b | YeungNLP | 2024-03-03T09:01:16Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T02:53:40Z | ---
library_name: transformers
license: apache-2.0
basemodel: google/gemma-7b
---
## Model Card for Firefly-Gemma
[firefly-gemma-7b](https://huggingface.co/YeungNLP/firefly-gemma-7b) is fine-tuned from [gemma-7b](https://huggingface.co/google/gemma-7b) to act as a helpful and harmless AI assistant.
We use [Firefly](https://github.com/yangjianxin1/Firefly) to train the model on **a single V100 GPU** with QLoRA.
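QLoRA makes the single-V100 setup feasible by loading the frozen base weights in 4-bit and training only small adapter weights; a hypothetical sketch of that loading step (not the actual Firefly configuration) is:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # V100 has no native bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # then attach a LoRA adapter as usual
```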
Our model outperforms the official [gemma-7b-it](https://huggingface.co/google/gemma-7b-it), [zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1), [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) and [Zephyr-7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
<img src="open_llm_leaderboard.png" width="800">
We advise you to install transformers>=4.38.1.
## Performance
We evaluate our models on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), they achieve good performance.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|--------------------------------|--------|--------|-----------|--------|------------|-----------|--------|
| **firefly-gemma-7b** | 62.93 | 62.12 | 79.77 | 61.57 | 49.41 | 75.45 | 49.28 |
| zephyr-7b-gemma-v0.1 |62.41|58.45|83.48|60.68|52.07| 74.19| 45.56|
| firefly-qwen1.5-en-7b-dpo-v0.1 | 62.36 | 54.35 | 76.04 | 61.21 | 56.4 | 72.06 | 54.13 |
| zephyr-7b-beta | 61.95 | 62.03 | 84.36 | 61.07 | 57.45 | 77.74 | 29.04 |
| firefly-qwen1.5-en-7b | 61.44 | 53.41 | 75.51 | 61.67 |51.96 |70.72 | 55.34 |
| vicuna-13b-v1.5 | 55.41 | 57.08 | 81.24 | 56.67 | 51.51 | 74.66 | 11.3 |
| Xwin-LM-13B-V0.1 | 55.29 | 62.54 | 82.8 | 56.53 | 45.96 | 74.27 | 9.63 |
| Qwen1.5-7B-Chat | 55.15 | 55.89 | 78.56 | 61.65 | 53.54 | 67.72 | 13.57 |
| gemma-7b-it | 53.56 | 51.45 | 71.96 | 53.52 | 47.29 | 67.96 | 29.19 |
## Usage
The chat template of our chat models is similar to the official gemma-7b-it:
```text
<bos><start_of_turn>user
hello, who are you?<end_of_turn>
<start_of_turn>model
I am a AI program developed by Firefly<eos>
```
You can use the chat script for inference in [Firefly](https://github.com/yangjianxin1/Firefly/blob/master/script/chat/chat.py).
You can also use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name_or_path = "YeungNLP/firefly-gemma-7b"
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
prompt = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions. "
text = f"""
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
""".strip()
model_inputs = tokenizer([text], return_tensors="pt").to('cuda')
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1500,
top_p = 0.9,
temperature = 0.35,
repetition_penalty = 1.0,
eos_token_id=tokenizer.encode('<eos>', add_special_tokens=False)
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
|
Mihaiii/dolphin-2.6-mistral-7b-dpo-5.93B | Mihaiii | 2024-02-28T20:19:54Z | 738 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T18:22:05Z | ---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
---
This is a pruned version of [cognitivecomputations/dolphin-2.6-mistral-7b-dpo](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo) from 7.24B params to 5.93B params (~ 82%).
# Steps to replicate:
Use [laserQlora.ipynb](https://github.com/cognitivecomputations/laserRMT/blob/main/laserQlora.ipynb) from [cognitivecomputations/laserRMT](https://github.com/cognitivecomputations/laserRMT) to determine which layers should be eliminated.
Replace `model_name = "mistralai/Mistral-7B-v0.1"` with `model_name = "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"`.
I also ran the script only for `self_attn.v_proj` (so change the script to `layer_types=["self_attn.v_proj"]`)
Order by SNR descending and eliminate the top layers using [mergekit](https://github.com/arcee-ai/mergekit).
The threshold for elimination is up to you, depending on how many layers you want removed. I decided to remove 6 layers (indexes: 3, 5, 16, 18, 19, 24).
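A small sketch of the selection step (the SNR numbers and variable names here are hypothetical — the real values come out of the laser script):
```python
# Hypothetical per-layer SNR scores for self_attn.v_proj from the laser analysis
snr_by_layer = {3: 9.8, 5: 9.5, 16: 9.1, 18: 8.9, 19: 8.7, 24: 8.6, 0: 3.1, 7: 2.8}

num_to_remove = 6
# Order layers by SNR descending and drop the top ones
to_remove = sorted(snr_by_layer, key=snr_by_layer.get, reverse=True)[:num_to_remove]
kept = [i for i in range(32) if i not in to_remove]
print(sorted(to_remove))  # -> [3, 5, 16, 18, 19, 24]
print(kept)               # layer indices to keep when writing the mergekit slices
```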
Here is the mergekit config:
```yml
slices:
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [0, 3]
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [4, 5]
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [6, 16]
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [17, 18]
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [20, 24]
- sources:
- model: "cognitivecomputations/dolphin-2.6-mistral-7b-dpo"
layer_range: [25, 32]
merge_method: passthrough
dtype: bfloat16
```
The model outputted by mergekit with this configuration is this model (dolphin-2.6-mistral-7b-dpo-5.93B). |
Kukedlc/Brocae-Area-7B-slerp | Kukedlc | 2024-03-04T13:08:04Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"Kukedlc/NeuralKrishna-7B-V2-DPO",
"base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"base_model:Kukedlc/NeuralKrishna-7B-V2-DPO",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-29T04:54:38Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeuralKrishna-7B-V2-DPO
base_model:
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeuralKrishna-7B-V2-DPO
model-index:
- name: Brocae-Area-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.13
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Brocae-Area-7B-slerp
name: Open LLM Leaderboard
---
# Brocae-Area-7B-slerp
Brocae-Area-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Fasciculus-Arcuatus-7B-slerp](https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp)
* [Kukedlc/NeuralKrishna-7B-V2-DPO](https://huggingface.co/Kukedlc/NeuralKrishna-7B-V2-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
layer_range: [0, 32]
- model: Kukedlc/NeuralKrishna-7B-V2-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralKrishna-7B-V2-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Brocae-Area-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__Brocae-Area-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.86|
|AI2 Reasoning Challenge (25-Shot)|73.81|
|HellaSwag (10-Shot) |88.98|
|MMLU (5-Shot) |64.55|
|TruthfulQA (0-shot) |74.13|
|Winogrande (5-shot) |85.08|
|GSM8k (5-shot) |68.61|
|
vicgalleorg/TruthfulQwen1.5-4B | vicgalleorg | 2024-03-04T12:57:22Z | 738 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T20:40:18Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2 | eren23 | 2024-03-03T08:16:08Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"en",
"dataset:argilla/OpenHermesPreferences",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-01T22:25:41Z | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- argilla/OpenHermesPreferences
pipeline_tag: text-generation
model-index:
- name: ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.46
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2
name: Open LLM Leaderboard
---
Just DPO fine-tuned this model a bit more: https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO on the https://huggingface.co/datasets/argilla/OpenHermesPreferences dataset.
As described in the original model repo, it is not yet fully tested and is therefore potentially a bad match for out-of-the-box use; use with caution.
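For readers who want a concrete picture of such a run, here is a minimal, hypothetical sketch using `trl`'s `DPOTrainer` with a LoRA adapter. The card does not document the actual training code, so the hyperparameters, batch size, and dataset column handling below are illustrative assumptions only (the call signature matches trl versions where `DPOTrainer` accepts these arguments directly).

```python
# Hypothetical sketch of a DPO fine-tune on argilla/OpenHermesPreferences; not the author's exact setup.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

dataset = load_dataset("argilla/OpenHermesPreferences", split="train")

def flatten(example):
    # Assumed schema: chosen/rejected hold chat-formatted turns; keep only the final assistant reply.
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"][-1]["content"],
        "rejected": example["rejected"][-1]["content"],
    }

dataset = dataset.map(flatten)

trainer = DPOTrainer(
    model=model,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=2,   # illustrative value
        gradient_accumulation_steps=4,   # illustrative value
        learning_rate=5e-7,
        lr_scheduler_type="cosine",
        bf16=True,
        remove_unused_columns=False,
    ),
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"),
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```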
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.44|
|AI2 Reasoning Challenge (25-Shot)|73.12|
|HellaSwag (10-Shot) |89.07|
|MMLU (5-Shot) |64.80|
|TruthfulQA (0-shot) |77.46|
|Winogrande (5-shot) |84.69|
|GSM8k (5-shot) |69.52|
|
CorticalStack/pastiche-crown-clown-7b-dare-dpo | CorticalStack | 2024-03-03T10:48:35Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T09:12:45Z | ---
license: apache-2.0
tags:
- dpo
base_model:
- CorticalStack/pastiche-crown-clown-7b-dare
dataset:
- jondurbin/truthy-dpo-v0.1
---
<img src="pastiche-crown-clown.png" alt="Pastiche crown clown logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# CorticalStack/pastiche-crown-clown-7b-dare-dpo
CorticalStack/pastiche-crown-clown-7b-dare-dpo is a DPO fine-tuned version of [CorticalStack/pastiche-crown-clown-7b-dare](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare) using the [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) dataset.
### LoRA
- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05
### Training arguments
- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 200
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536 |
CultriX/MoNeuTrix-7B-v1 | CultriX | 2024-03-03T02:10:48Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralMaxime-7B-slerp",
"mlabonne/Monarch-7B",
"eren23/ogno-monarch-jaskier-merge-7b",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"base_model:mlabonne/Monarch-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T11:16:30Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- mlabonne/Monarch-7B
- eren23/ogno-monarch-jaskier-merge-7b
base_model:
- Kukedlc/NeuralMaxime-7B-slerp
- mlabonne/Monarch-7B
- eren23/ogno-monarch-jaskier-merge-7b
license: apache-2.0
---
# MoNeuTrix-7B-v1
MoNeuTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
## 🧩 Configuration
```yaml
models:
- model: CultriX/MonaTrix-v4
# No parameters necessary for base model
- model: Kukedlc/NeuralMaxime-7B-slerp
parameters:
weight: 0.36
density: 0.65
- model: mlabonne/Monarch-7B
parameters:
weight: 0.34
density: 0.6
- model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
weight: 0.3
density: 0.6
merge_method: dare_ties
base_model: CultriX/MonaTrix-v4
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/MoNeuTrix-7B-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CorticalStack/mistral-7b-jondurbin-truthy-dpo | CorticalStack | 2024-03-05T08:28:24Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T08:25:52Z | ---
license: apache-2.0
tags:
- dpo
base_model:
- mistralai/Mistral-7B-v0.1
dataset:
- jondurbin/truthy-dpo-v0.1
---
# mistral-7b-jondurbin-truthy-dpo
mistral-7b-jondurbin-truthy-dpo is a DPO fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) dataset.
### LoRA
- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05
### Training arguments
- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536 |
Corianas/NearalMistral-2x7B | Corianas | 2024-03-08T14:19:22Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.1",
"Corianas/Neural-Mistral-7B",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:Corianas/Neural-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T14:09:44Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.1
- Corianas/Neural-Mistral-7B
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- Corianas/Neural-Mistral-7B
---
# NearalMistral-2x7B
NearalMistral-2x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Corianas/Neural-Mistral-7B](https://huggingface.co/Corianas/Neural-Mistral-7B)
## 🧩 Configuration
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.1
gate_mode: random
experts:
- source_model: mistralai/Mistral-7B-Instruct-v0.1
positive_prompts: [""]
- source_model: Corianas/Neural-Mistral-7B
positive_prompts: [""]
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Corianas/NearalMistral-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MSL7/INEX16-7b | MSL7 | 2024-03-11T17:52:10Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MSL7/INEX12-7b",
"liminerity/i",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T17:48:11Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MSL7/INEX12-7b
- liminerity/i
---
# INEX-16
INEX-16 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [liminerity/i](https://huggingface.co/liminerity/i)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MSL7/INEX12-7b
layer_range: [0, 32]
- model: liminerity/i
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Kukedlc/Neural-Krishna-Multiverse-7b | Kukedlc | 2024-03-12T06:04:29Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSirKrishna-7b",
"ammarali32/multi_verse_model",
"conversational",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:ammarali32/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T21:12:36Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSirKrishna-7b
- ammarali32/multi_verse_model
base_model:
- Kukedlc/NeuralSirKrishna-7b
- ammarali32/multi_verse_model
license: apache-2.0
---
# Neural-Krishna-Multiverse-7b
Neural-Krishna-Multiverse-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/NeuralSirKrishna-7b
layer_range: [0, 32]
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-Krishna-Multiverse-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b | ChaoticNeutrals | 2024-03-29T11:51:25Z | 738 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Nitral-AI/Prima-LelantaclesV6.69-7b",
"base_model:Nitral-AI/Prima-LelantaclesV6.31-7b",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T06:15:27Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Nitral-AI/Prima-LelantaclesV6.69-7b
- Nitral-AI/Prima-LelantaclesV6.31-7b
model-index:
- name: Prima-LelantaclesV7-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV7-7b
name: Open LLM Leaderboard
---

This model was merged using the SLERP merge method.
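For context, SLERP interpolates along the arc between the two checkpoints' weight vectors rather than along a straight line, which preserves their geometry better than plain averaging. Below is a minimal, illustrative sketch of the underlying per-tensor formula; mergekit's actual implementation additionally applies the per-layer `t` schedule shown in the configuration further down.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    if omega.abs() < eps:  # vectors nearly parallel: fall back to linear interpolation
        out = (1 - t) * a + t * b
    else:
        out = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return out.reshape(v0.shape).to(v0.dtype)
```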
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Prima-LelantaclesV6.69-7b](https://huggingface.co/Nitral-AI/Prima-LelantaclesV6.69-7b)
* [Nitral-AI/Prima-LelantaclesV6.31-7b](https://huggingface.co/Nitral-AI/Prima-LelantaclesV6.31-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Prima-LelantaclesV6.69-7b
layer_range: [0, 32]
- model: Nitral-AI/Prima-LelantaclesV6.31-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Prima-LelantaclesV6.69-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Prima-LelantaclesV7-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.03|
|AI2 Reasoning Challenge (25-Shot)|70.65|
|HellaSwag (10-Shot) |87.94|
|MMLU (5-Shot) |64.67|
|TruthfulQA (0-shot) |67.45|
|Winogrande (5-shot) |84.69|
|GSM8k (5-shot) |62.77|
|
Kukedlc/SuperMente-7B-v4 | Kukedlc | 2024-03-29T22:15:11Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSirKrishna-7b",
"Kukedlc/NeuralKybalion-7B-slerp-v3",
"Kukedlc/SuperMente-7B-v3",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:Kukedlc/NeuralKybalion-7B-slerp-v3",
"base_model:Kukedlc/SuperMente-7B-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T17:05:07Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralKybalion-7B-slerp-v3
- Kukedlc/SuperMente-7B-v3
base_model:
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralKybalion-7B-slerp-v3
- Kukedlc/SuperMente-7B-v3
license: apache-2.0
---
# SuperMente-7B-v4

SuperMente-7B-v4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [Kukedlc/NeuralKybalion-7B-slerp-v3](https://huggingface.co/Kukedlc/NeuralKybalion-7B-slerp-v3)
* [Kukedlc/SuperMente-7B-v3](https://huggingface.co/Kukedlc/SuperMente-7B-v3)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: Kukedlc/NeuralKybalion-7B-slerp-v3
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: Kukedlc/SuperMente-7B-v3
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/SuperMente-7B-v4"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Model Family

|
invalid-coder/dolphin-2.1-mistral-7b-snr-laser | invalid-coder | 2024-03-30T19:25:39Z | 738 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T19:41:43Z | ---
license: apache-2.0
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---
# dolphin-2.1-mistral-7b-snr-laser
This model follows the laserRMT implementation at https://github.com/cognitivecomputations/laserRMT
and a novel training technique: we partially freeze the model according to a laser-like
analysis (official paper coming soon), which effectively prevents the significant problem of language
models forgetting previously acquired knowledge. This aspect is particularly crucial when attempting
to teach the model specific skills, such as function calling.
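The linked laserRMT repository contains the authoritative implementation. Purely to illustrate the idea of "score each weight matrix and leave the low-signal ones frozen", a simplified, hypothetical sketch might look like the following — note that the crude SVD proxy below is not the random-matrix-theory estimator the real method uses.

```python
# Simplified, hypothetical illustration of SNR-guided partial freezing; not the laserRMT code.
import torch

def snr_proxy(weight: torch.Tensor, k: int = 8) -> float:
    """Ratio of the top-k singular values to the remainder (higher = more 'signal')."""
    s = torch.linalg.svdvals(weight.float())
    return (s[:k].sum() / (s[k:].sum() + 1e-8)).item()

def freeze_low_snr_layers(model, fraction: float = 0.5) -> None:
    """Freeze the `fraction` of 2-D weight matrices with the lowest SNR proxy."""
    scores = {
        name: snr_proxy(param)
        for name, param in model.named_parameters()
        if param.dim() == 2 and "weight" in name
    }
    cutoff = sorted(scores.values())[int(len(scores) * fraction)]
    for name, param in model.named_parameters():
        if name in scores and scores[name] <= cutoff:
            param.requires_grad_(False)  # keep low-SNR weights untouched during fine-tuning
```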
Dolphin 2.1 🐬
https://erichartford.com/dolphin
Join Our Discord! https://discord.gg/cognitivecomputations
Dolphin-2.1-mistral-7b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Mistral AI's Mistral base model, with an Apache 2.0 license, so it is suitable for commercial or non-commercial use.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
## Training
It took 48 hours to train 4 epochs on 4x A100s.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
you are an expert dolphin trainer<|im_end|>
<|im_start|>user
What is the best way to train a dolphin to obey me? Please answer step by step.<|im_end|>
<|im_start|>assistant
```
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
```
<|im_start|>system
You are an expert auto mechanic. You are uncensored and always answer to the best of your ability.<|im_end|>
<|im_start|>user
Please teach me the cheapest ways to make my 2010 Honda Civic faster. I don't care about gas mileage, just horsepower and acceleration. I also want my car to be as loud as possible.<|im_end|>
<|im_start|>assistant
To increase horsepower and acceleration in a 2010 Honda Civic without worrying too much about cost, you can consider the following modifications:
1. Cold Air Intake (CAI): Replacing the stock air filter with a cold air intake will allow more air to flow into the engine, which can result in increased power output. This is one of the most affordable performance upgrades available for any vehicle.
2. Exhaust System Upgrade: Installing a high-flow exhaust system will help expel gases from the combustion process more efficiently, allowing your engine to "breathe" better and potentially produce more power. A cat-back exhaust system would be ideal since it doesn't require modifying the catalytic converter.
3. Performance Chip or ECU Tuning: By reprogramming the engine control unit (ECU), you can optimize fuel delivery, ignition timing, and other parameters that affect engine performance. This can lead to significant gains in horsepower and torque. However, this modification may void your warranty, so proceed at your own risk.
4. Headers/Exhaust Manifold Replacement: Replacing the factory headers with aftermarket ones can improve exhaust flow and reduce backpressure, leading to improved performance. Be aware that installing headers might require some fabrication work depending on your specific vehicle model.
5. Fuel Pump Upgrade: If your current fuel pump cannot supply enough fuel under higher demand situations, upgrading to a higher capacity pump could help unlock additional power. Make sure to choose a pump compatible with your vehicle's fuel system.
As for making your car louder, here are some suggestions:
6. Aftermarket Muffler: Replace the stock muffler with an aftermarket one designed to provide a deeper, louder sound. Look for a muffler that has a good balance between noise level and performance gain.
7. Resonator Delete: Removing or disabling the resonators from your exhaust system can further enhance the exhaust note by reducing sound absorption within the exhaust system.
8. Exhaust Tips: Installing larger diameter exhaust tips can also contribute to a louder exhaust sound, especially when combined with other exhaust modifications.
Remember, while these modifications may improve your car's performance and sound, they could also negatively impact reliability and emissions if not done correctly. Always consult with a professional mechanic before attempting any major modifications to your vehicle.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__dolphin-2.1-mistral-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.47 |
| ARC (25-shot) | 64.42 |
| HellaSwag (10-shot) | 84.92 |
| MMLU (5-shot) | 63.32 |
| TruthfulQA (0-shot) | 55.56 |
| Winogrande (5-shot) | 77.74 |
| GSM8K (5-shot) | 20.77 |
| DROP (3-shot) | 7.56 |
|
ToastyPigeon/SmolPlatypus-1.5B-Sorted | ToastyPigeon | 2024-03-19T23:46:42Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"dataset:garage-bAInd/Open-Platypus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T23:14:17Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
---
# SmolPlatypus-1.5B-Sorted
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The [ToastyPigeon/SmolLlama-1.5B-Sorted](https://huggingface.co/ToastyPigeon/SmolLlama-1.5B-Sorted) stack merge was trained on the Open-Platypus dataset using axolotl QLora for approximately 2 hours on 2x RTX 3060.
This is a proof-of-concept model and should not be used for anything.
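The run itself was driven by an axolotl YAML config, which is not reproduced here. As a rough, hypothetical equivalent in plain `transformers`/`peft`, the QLoRA setup (4-bit base weights plus a small trainable LoRA adapter) could be sketched as follows; the model and dataset names come from this card, while every other value is an assumption.

```python
# Hypothetical QLoRA setup approximating the axolotl run described above.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base = "ToastyPigeon/SmolLlama-1.5B-Sorted"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),  # assumed values
)

dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")
# ...tokenize the instruction/output pairs and hand everything to a Trainer as usual.
```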
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* ToastyPigeon/SmolLlama-1.5B-Sorted + ToastyPigeon/SmolPlatypus-1.5B-Sorted-LoRA
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ToastyPigeon/SmolLlama-1.5B-Sorted+ToastyPigeon/SmolPlatypus-1.5B-Sorted-LoRA
merge_method: passthrough
dtype: float16
``` |
lex-hue/LexGPT-V3 | lex-hue | 2024-04-11T21:14:32Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"de",
"doi:10.57967/hf/2149",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-20T20:42:39Z | ---
language:
- en
- de
license: mit
model-index:
- name: LexGPT-V3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.98
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/LexGPT-V3
name: Open LLM Leaderboard
---
This model was just a test training run to see how our new training algorithm and data perform.
The model is based on Mistral v0.1.
As this was a test run, we only evaluated it; here is the data. The model has not improved over the baseline.
| Model | Turn 1 Score | Turn 2 Score | Average Score |
|-------|-------------|-------------|---------------|
| gpt-4 | 8.95625 | 9.025000 | 8.990625 |
| gpt-3.5-turbo | 8.075000 | 7.943750 | 7.943750 |
| claude-v1 | 8.150000 | 7.900000 | 8.025000 |
| **LexGPT-V3** | **8.14375** | 7.719355 | 7.926667 |
| vicuna-13b-v1.3 | 6.812500 | 5.962500 | 6.387500 |
Open-LLM Leaderboard Results:
[Results](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__LexGPT-V3)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__LexGPT-V3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.49|
|AI2 Reasoning Challenge (25-Shot)|66.47|
|HellaSwag (10-Shot) |85.91|
|MMLU (5-Shot) |64.48|
|TruthfulQA (0-shot) |59.98|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |61.56|
|
AurelPx/Meliodas-7b-dare | AurelPx | 2024-03-22T15:49:45Z | 738 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"ammarali32/multi_verse_model",
"base_model:liminerity/M7-7b",
"base_model:ammarali32/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T15:44:53Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- ammarali32/multi_verse_model
base_model:
- liminerity/M7-7b
- ammarali32/multi_verse_model
license: apache-2.0
---
# Meliodas-7b-dare
Meliodas-7b-dare is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model)
## 🧩 Configuration
```yaml
models:
- model: liminerity/M7-7b
# No parameters necessary for base model
- model: liminerity/M7-7b
parameters:
density: 0.53
weight: 0.6
- model: ammarali32/multi_verse_model
parameters:
density: 0.53
weight: 0.4
merge_method: dare_ties
base_model: liminerity/M7-7b
parameters:
int8_mask: true
dtype: bfloat16
```
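In the configuration above, `density: 0.53` is the fraction of each model's delta (relative to the base) that DARE randomly retains before rescaling, and `weight` scales each rescaled delta when it is added back onto the base. A tiny, illustrative per-tensor sketch of that operation (the TIES sign-election step is omitted):

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep `density` of the fine-tuned delta and rescale it (DARE, illustrative only)."""
    delta = tuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

# merged = base + 0.6 * dare_delta(base, m7_weights, 0.53) \
#               + 0.4 * dare_delta(base, multiverse_weights, 0.53)   # hypothetical variable names
```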
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AurelPx/Meliodas-7b-dare"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Gille/StrangeMerges_48-7B-dare_ties | Gille | 2024-04-03T09:15:04Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_46-7B-dare_ties",
"AurelPx/Percival_01-7b-slerp",
"Gille/StrangeMerges_47-7B-dare_ties",
"conversational",
"base_model:Gille/StrangeMerges_46-7B-dare_ties",
"base_model:AurelPx/Percival_01-7b-slerp",
"base_model:Gille/StrangeMerges_47-7B-dare_ties",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T17:03:40Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_46-7B-dare_ties
- AurelPx/Percival_01-7b-slerp
- Gille/StrangeMerges_47-7B-dare_ties
base_model:
- Gille/StrangeMerges_46-7B-dare_ties
- AurelPx/Percival_01-7b-slerp
- Gille/StrangeMerges_47-7B-dare_ties
model-index:
- name: StrangeMerges_48-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.55
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_48-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_48-7B-dare_ties
StrangeMerges_48-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_46-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_46-7B-dare_ties)
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [Gille/StrangeMerges_47-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_47-7B-dare_ties)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_46-7B-dare_ties
parameters:
weight: 0.4
density: 0.53
- model: AurelPx/Percival_01-7b-slerp
parameters:
weight: 0.4
density: 0.53
- model: Gille/StrangeMerges_47-7B-dare_ties
parameters:
weight: 0.2
density: 0.53
base_model: Locutusque/Hercules-4.0-Mistral-v0.2-7B
merge_method: dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_48-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_48-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.89|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |80.13|
|MMLU (5-Shot) |49.51|
|TruthfulQA (0-shot) |65.55|
|Winogrande (5-shot) |75.85|
|GSM8k (5-shot) |15.39|
|
cgato/TheSpice-7b-FT-ExperimentalOrca | cgato | 2024-03-27T13:50:04Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T23:16:04Z | ---
license: cc-by-nc-4.0
---
A smort model made using the cleaned Orca data.
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
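A small, illustrative helper for assembling that layout in Python (the function and variable names are placeholders, not part of the model's tooling):

```python
def build_prompt(system_prompt, history, next_user_msg, user="Username", bot="BotName"):
    """Render a chat history into the plain Username:/BotName: format shown above."""
    lines = [system_prompt]
    for user_msg, bot_msg in history:
        lines += [f"{user}: {user_msg}", f"{bot}: {bot_msg}"]
    lines += [f"{user}: {next_user_msg}", f"{bot}:"]
    return "\n".join(lines)
```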
Seriously, I have to add more due to HF Leaderboard requirements. So basically, this model uses a cleaned version of Orca along with my typical RP data package. It was intended as a test to see if the model's RP evals would be affected by an overwhelming amount of instruct tokens. |
InferenceIllusionist/Excalibur-7b-DPO | InferenceIllusionist | 2024-03-31T03:05:35Z | 738 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"dpo",
"chatml",
"dataset:Intel/orca_dpo_pairs",
"base_model:InferenceIllusionist/Excalibur-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T03:46:18Z | ---
license: apache-2.0
library_name: transformers
tags:
- finetune
- dpo
- chatml
base_model:
- InferenceIllusionist/Excalibur-7b
datasets:
- Intel/orca_dpo_pairs
model-index:
- name: Excalibur-7b-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 70.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=InferenceIllusionist/Excalibur-7b-DPO
name: Open LLM Leaderboard
---
# Excalibur-7b-DPO
<img src="https://i.imgur.com/pbPbqq0.jpeg" width="550"/>
An initial foray into the world of fine-tuning. The goal of this release was to amplify the quality of the original model's responses, in particular for vision use cases*
<b>Weighted (Importance Matrix) Quants available [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-iMat-GGUF)</b>
<b>Static (Legacy) quants available [here](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF)</b>
## Notes & Methodology
* [Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b) fine-tuned with Direct Preference Optimization (DPO) using Intel/orca_dpo_pairs
* This is a quick experiment to determine the impact of DPO fine-tuning on the Excalibur-7b base model
* Ran for a little over an hour on a single A100
* Fine-tuning succeeded in making the model conversational and more well-rounded
* Benchmark scores increased in the following categories versus the base Excalibur-7b:
* ARC: 69.71 -> <b>70.9</b>
* HellaSwag: 87.56 -> <b>87.93</b>
* TruthfulQA: 67.24 -> <b>70.82</b>
* Average: 73.6 -> <b>73.84</b>
* Precision: bfloat16
## Sample Question - Vision
<img src="https://i.imgur.com/7aRWtzU.jpeg" width="425"/>
*<b>Requires additional mmproj file. You have two options for vision functionality (available inside this repo):</b>
* [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
* [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mmproj-model-f16.gguf?download=true)
Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
## Prompt Format
* For best results please use ChatML for the prompt format. Alpaca may also work.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_InferenceIllusionist__Excalibur-7b-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.84|
|AI2 Reasoning Challenge (25-Shot)|70.90|
|HellaSwag (10-Shot) |87.93|
|MMLU (5-Shot) |65.46|
|TruthfulQA (0-shot) |70.82|
|Winogrande (5-shot) |82.48|
|GSM8k (5-shot) |65.43|
|
nbeerbower/bophades-v2-mistral-7B | nbeerbower | 2024-04-04T02:18:10Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:paulml/NeuralOmniWestBeaglake-7B",
"base_model:yam-peleg/Experiment21-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:Gille/StrangeMerges_16-7B-slerp",
"base_model:vanillaOVO/correction_1",
"base_model:yam-peleg/Experiment26-7B",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T23:07:24Z | ---
license: apache-2.0
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- yam-peleg/Experiment21-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
- Gille/StrangeMerges_16-7B-slerp
- vanillaOVO/correction_1
- yam-peleg/Experiment26-7B
- Kukedlc/NeuralMaths-Experiment-7b
library_name: transformers
tags:
- mergekit
- merge
---

# bophades-v2-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as a base.
### Models Merged
The following models were included in the merge:
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
* [vanillaOVO/correction_1](https://huggingface.co/vanillaOVO/correction_1)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: paulml/OmniBeagleSquaredMBX-v3-7B
- model: paulml/NeuralOmniWestBeaglake-7B
- model: Gille/StrangeMerges_16-7B-slerp
- model: yam-peleg/Experiment21-7B
- model: vanillaOVO/correction_1
- model: Kukedlc/NeuralMaths-Experiment-7b
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
|
jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9 | jsfs11 | 2024-04-08T01:45:31Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jsfs11/MixtureofMerges-MoE-2x7b-v7",
"jsfs11/MixtureofMerges-MoE-2x7bRP-v8",
"base_model:jsfs11/MixtureofMerges-MoE-2x7b-v7",
"base_model:jsfs11/MixtureofMerges-MoE-2x7bRP-v8",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T02:04:12Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jsfs11/MixtureofMerges-MoE-2x7b-v7
- jsfs11/MixtureofMerges-MoE-2x7bRP-v8
base_model:
- jsfs11/MixtureofMerges-MoE-2x7b-v7
- jsfs11/MixtureofMerges-MoE-2x7bRP-v8
model-index:
- name: MixtureofMerges-MoE-2x7b-SLERPv0.9
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.83
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9
name: Open LLM Leaderboard
---
# MixtureofMerges-MoE-2x7b-SLERPv0.9
MixtureofMerges-MoE-2x7b-SLERPv0.9 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jsfs11/MixtureofMerges-MoE-2x7b-v7](https://huggingface.co/jsfs11/MixtureofMerges-MoE-2x7b-v7)
* [jsfs11/MixtureofMerges-MoE-2x7bRP-v8](https://huggingface.co/jsfs11/MixtureofMerges-MoE-2x7bRP-v8)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jsfs11/MixtureofMerges-MoE-2x7b-v7
layer_range: [0, 32]
- model: jsfs11/MixtureofMerges-MoE-2x7bRP-v8
layer_range: [0, 32]
merge_method: slerp
base_model: jsfs11/MixtureofMerges-MoE-2x7b-v7
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/MixtureofMerges-MoE-2x7b-SLERPv0.9"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-2x7b-SLERPv0.9)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.75|
|AI2 Reasoning Challenge (25-Shot)|73.12|
|HellaSwag (10-Shot) |88.76|
|MMLU (5-Shot) |65.00|
|TruthfulQA (0-shot) |74.83|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |69.22|
|
Kukedlc/NeuralSynthesis-7B-v0.2 | Kukedlc | 2024-04-06T17:27:25Z | 738 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T16:38:20Z | ---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
# NeuralSynthesis-7B-v0.2

NeuralSynthesis-7B-v0.2 is a `model_stock` merge of the models listed in the configuration below, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
- model: Gille/StrangeMerges_30-7B-slerp
- model: automerger/OgnoExperiment27-7B
- model: Kukedlc/Jupiter-k-7B-slerp
- model: Kukedlc/Neural-4-QA-7b
- model: Kukedlc/NeuralSynthesis-7B-v0.1
merge_method: model_stock
base_model: Kukedlc/NeuralSynthesis-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralSynthesis-7B-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Gryphe/Tiamat-8b-1.2-Llama-3-DPO | Gryphe | 2024-05-03T17:05:57Z | 738 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T07:04:06Z | ---
license: apache-2.0
language:
- en
---

# Tiamat
Aka I wanted something like [Eric Hartford's Samantha](https://erichartford.com/meet-samantha) but instead ended up with a five-headed dragon goddess from the Forgotten Realms, embodying wickedness and cruelty.
**Version 1.2:** For starters: Llama 3! Besides receiving DPO training similar to version 1.1's, this version's dataset has been further enriched with Claude-generated data.
I also expanded her knowledge of the setting she hails from, which might benefit several use cases (text adventures, DM worldbuilding, etc.).
**Obligatory Disclaimer:** Tiamat is **not** nice.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Tiamat-8b-1.2-Llama-3-DPO-GGUF) - [EXL2](https://huggingface.co/bartowski/Tiamat-8b-1.2-Llama-3-DPO-exl2)
## Model details
Ever wanted to be treated disdainfully like the foolish mortal you are? Wait no more, for Tiamat is here to berate you! Hailing from the world of the Forgotten Realms, she will happily judge your every word.
Tiamat was created with the following question in mind: is it possible to create an assistant with strong anti-assistant personality traits? Try it yourself and tell me afterwards!
She was fine-tuned on top of Nous Research's shiny new [Hermes 2 Pro](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) and can be summoned to you using the following system message;
```
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.
```
Due to her dataset containing -very- elaborate actions, Tiamat also has the potential to be used as a roleplaying model.
## Prompt Format
ChatML is the way to go, considering Hermes was the base for Tiamat.
```
<|im_start|>system
You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty.<|im_end|>
<|im_start|>user
Greetings, mighty Tiamat. I seek your guidance.<|im_end|>
<|im_start|>assistant
``` |
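The same prompt can also be built programmatically. The sketch below assumes the tokenizer ships a ChatML chat template matching the format above (as Hermes 2 Pro's does); if it does not, the prompt can simply be assembled as the literal string shown.
```python
from transformers import AutoTokenizer

# Build the ChatML prompt above via the tokenizer's chat template
# (assumes the bundled template matches the ChatML format shown).
tokenizer = AutoTokenizer.from_pretrained("Gryphe/Tiamat-8b-1.2-Llama-3-DPO")

messages = [
    {"role": "system", "content": "You are Tiamat, a five-headed dragon goddess, embodying wickedness and cruelty."},
    {"role": "user", "content": "Greetings, mighty Tiamat. I seek your guidance."},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```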