modelId (string, length 5-122) | author (string, length 2-42) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, length 1-4.05k) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, length 1-901k)
---|---|---|---|---|---|---|---|---|---|
FelixChao/Capricorn-7B-DPO | FelixChao | 2024-02-15T05:15:02Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T05:07:17Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
logicker/SkkuDS-DPO-72B-v1 | logicker | 2024-02-16T09:51:54Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained, dpo",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T08:14:26Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained, dpo
---
# Qwen1.5-72B
## DPO Tuning
- Dataset: Intel/orca_dpo_pairs
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.
For more details, please refer to [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For this beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
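For illustration (this snippet is not part of the original card), a minimal loading sketch using the standard transformers API; the dtype, device placement, and prompt are assumptions, and a 72B checkpoint needs multiple GPUs or offloading:
```python
# Minimal sketch, assuming transformers>=4.37.0 (Qwen2 support), accelerate installed,
# and enough GPU memory (or CPU offloading) for a 72B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "logicker/SkkuDS-DPO-72B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard/offload across available devices via accelerate
)

inputs = tokenizer("Direct Preference Optimization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```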
## Citation
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
logicker/SkkuDS-DPO-72B-v3 | logicker | 2024-02-27T17:09:50Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained, dpo",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T09:22:31Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- pretrained, dpo
---
# Qwen1.5-72B
## DPO Tuning
- Dataset: Intel/orca_dpo_pairs
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.
For more details, please refer to [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5 is a language model series including decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code. For this beta version, we have temporarily not included GQA or the mixture of SWA and full attention.
## Requirements
The code for Qwen1.5 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'.
```
## Citation
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
Gille/StrangeMerges_25-7B-dare_ties | Gille | 2024-03-04T21:52:31Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_21-7B-slerp",
"bardsai/jaskier-7b-dpo-v5.6",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:bardsai/jaskier-7b-dpo-v5.6",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T21:10:52Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_21-7B-slerp
- bardsai/jaskier-7b-dpo-v5.6
base_model:
- Gille/StrangeMerges_21-7B-slerp
- bardsai/jaskier-7b-dpo-v5.6
model-index:
- name: StrangeMerges_25-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.54
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_25-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_25-7B-dare_ties
Note:
This merge inherited the "INSTINSTINSTINSTINST" repetition issue from one of the source models, which will have to be merged out (or otherwise fixed) in a later iteration.
StrangeMerges_25-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_20-7B-slerp
# No parameters necessary for base model
- model: Gille/StrangeMerges_21-7B-slerp
parameters:
density: 0.6
weight: 0.3
- model: bardsai/jaskier-7b-dpo-v5.6
parameters:
density: 0.4
weight: 0.7
merge_method: dare_ties
base_model: Gille/StrangeMerges_20-7B-slerp
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_25-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_25-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.33|
|AI2 Reasoning Challenge (25-Shot)|73.46|
|HellaSwag (10-Shot) |88.89|
|MMLU (5-Shot) |64.37|
|TruthfulQA (0-shot) |76.54|
|Winogrande (5-shot) |84.29|
|GSM8k (5-shot) |70.43|
|
Gille/StrangeMerges_26-7B-dare_ties | Gille | 2024-03-04T21:51:17Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OGNO-7B",
"Gille/StrangeMerges_25-7B-dare_ties",
"base_model:paulml/OGNO-7B",
"base_model:Gille/StrangeMerges_25-7B-dare_ties",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T03:54:42Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- paulml/OGNO-7B
- Gille/StrangeMerges_25-7B-dare_ties
base_model:
- paulml/OGNO-7B
- Gille/StrangeMerges_25-7B-dare_ties
model-index:
- name: StrangeMerges_26-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_26-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_26-7B-dare_ties
StrangeMerges_26-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OGNO-7B](https://huggingface.co/paulml/OGNO-7B)
* [Gille/StrangeMerges_25-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_25-7B-dare_ties)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_21-7B-slerp
# No parameters necessary for base model
- model: paulml/OGNO-7B
parameters:
density: 0.5
weight: 0.4
- model: Gille/StrangeMerges_25-7B-dare_ties
parameters:
density: 0.5
weight: 0.6
merge_method: dare_ties
base_model: Gille/StrangeMerges_21-7B-slerp
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_26-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_26-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.19|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |89.00|
|MMLU (5-Shot) |64.35|
|TruthfulQA (0-shot) |76.39|
|Winogrande (5-shot) |84.45|
|GSM8k (5-shot) |69.98|
|
premai-io/tinyllama-3B | premai-io | 2024-02-21T14:25:10Z | 727 | 0 | transformers | [
"transformers",
"llama",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-02-21T14:23:45Z | Entry not found |
Gille/StrangeMerges_29-7B-dare_ties | Gille | 2024-03-04T21:48:00Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_21-7B-slerp",
"CultriX/MonaTrix-v4",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:CultriX/MonaTrix-v4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T18:43:21Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_21-7B-slerp
- CultriX/MonaTrix-v4
base_model:
- Gille/StrangeMerges_21-7B-slerp
- CultriX/MonaTrix-v4
model-index:
- name: StrangeMerges_29-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.98
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_29-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_29-7B-dare_ties
StrangeMerges_29-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_20-7B-slerp
# No parameters necessary for base model
- model: Gille/StrangeMerges_21-7B-slerp
parameters:
density: 0.6
weight: 0.3
- model: CultriX/MonaTrix-v4
parameters:
density: 0.4
weight: 0.7
merge_method: dare_ties
base_model: Gille/StrangeMerges_20-7B-slerp
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_29-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_29-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.09|
|AI2 Reasoning Challenge (25-Shot)|73.04|
|HellaSwag (10-Shot) |89.04|
|MMLU (5-Shot) |64.29|
|TruthfulQA (0-shot) |76.98|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |68.69|
|
splm/openchat-spin-slimorca-iter1 | splm | 2024-02-22T06:38:18Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T06:30:12Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Eric111/openchat-3.5-0106-128k-DPO | Eric111 | 2024-02-23T22:34:02Z | 727 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T22:24:16Z | ---
library_name: transformers
license: apache-2.0
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
DPO fine-tuned version of CallComply/openchat-3.5-0106-128k with Intel/orca_dpo_pairs
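As an illustration only (this is not the author's actual recipe), a sketch of how such a DPO pass could be set up with TRL's `DPOTrainer`; the column mapping, hyperparameters, sequence lengths, and padding handling below are assumptions, and the exact `DPOTrainer` signature varies across trl versions:
```python
# Hypothetical sketch of DPO fine-tuning CallComply/openchat-3.5-0106-128k on Intel/orca_dpo_pairs.
# Hyperparameters, prompt formatting, and lengths are assumptions, not the author's settings.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "CallComply/openchat-3.5-0106-128k"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral-family tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# orca_dpo_pairs has system/question/chosen/rejected columns; DPOTrainer expects prompt/chosen/rejected.
def to_dpo(example):
    return {
        "prompt": example["question"],   # the system message is dropped here for simplicity
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(to_dpo, remove_columns=dataset.column_names)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl builds a frozen copy of the policy as the reference model
    args=TrainingArguments(
        output_dir="openchat-3.5-0106-128k-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        remove_unused_columns=False,
    ),
    beta=0.1,               # assumed DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```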
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jan-hq/stealth-v2 | jan-hq | 2024-03-04T13:36:12Z | 727 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T10:27:55Z | ---
language:
- en
license: apache-2.0
model-index:
- name: stealth-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.47
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 88.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jan-hq/stealth-v2
name: Open LLM Leaderboard
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"
>
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner"
style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
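As a usage note (not part of the original card), a small sketch of building this ChatML prompt with transformers, assuming the repository's tokenizer ships the ChatML chat template shown above:
```python
# Sketch: format a ChatML prompt via the tokenizer's chat template (assumed to match the template above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jan-hq/stealth-v2")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a model merge is in one paragraph."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should end with "<|im_start|>assistant\n", ready for generation
```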
# Training detail
You can read [here](https://huggingface.co/jan-hq/stealth-finance-v1-adapter).
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-hq__stealth-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.37|
|AI2 Reasoning Challenge (25-Shot)|73.89|
|HellaSwag (10-Shot) |89.26|
|MMLU (5-Shot) |64.94|
|TruthfulQA (0-shot) |72.47|
|Winogrande (5-shot) |88.00|
|GSM8k (5-shot) |69.67|
|
Locutusque/NeuralHyperion-Medium-Preview | Locutusque | 2024-03-02T07:19:14Z | 727 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T04:13:38Z | ---
language:
- en
license: apache-2.0
library_name: transformers
model-index:
- name: NeuralHyperion-Medium-Preview
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.67
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/NeuralHyperion-Medium-Preview
name: Open LLM Leaderboard
---
# Model Card for Locutusque/NeuralHyperion-Medium

## Model Details
**Model Name**: Locutusque/NeuralHyperion-Medium
**Base Model**: mistralai/Mistral-7B-v0.1
**Publisher**: M4-ai
**Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
**Language**: Multi-domain, English language.
**License**: Apache-2.0
## Model Description
`Locutusque/NeuralHyperion-Medium` is a state-of-the-art language model fine-tuned on the Hyperion dataset and further fine-tuned using DPO on Argilla’s orca DPO pairs for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `Locutusque/NeuralHyperion-Medium` model was fine-tuned on the Hyperion dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks. It is then further fine-tuned using DPO on Argilla’s orca DPO pairs to further improve reasoning.
## Evaluation Results
Coming soon...
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/NeuralHyperion-Medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
## Licensing Information
This model is released under the Apache-2.0 license.
## Citation Information
If you use Locutusque/NeuralHyperion-Medium in your research, please cite the Hyperion dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hyperion-1: Illuminating the Path to Advanced Reasoning with a High-Quality, Multidisciplinary Question Answering Dataset},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/hyperion-v1.0}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__NeuralHyperion-Medium-Preview)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.67|
|AI2 Reasoning Challenge (25-Shot)|60.67|
|HellaSwag (10-Shot) |83.67|
|MMLU (5-Shot) |63.73|
|TruthfulQA (0-shot) |42.93|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |40.49|
|
juhwanlee/experiment2-cause-v1 | juhwanlee | 2024-03-05T03:46:20Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T01:16:18Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Mistral-7B-v0.1. We fine-tuned this model for the data ordering task.
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sampled the Open-Orca dataset and fine-tuned on a 100,000-example subset.
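For illustration only (not from the card), one way such a 100,000-example random sample could be drawn with the datasets library; the seed is an assumption:
```python
# Sketch: randomly sample 100k examples from OpenOrca (seed chosen arbitrarily).
from datasets import load_dataset

orca = load_dataset("Open-Orca/OpenOrca", split="train")
subset = orca.shuffle(seed=42).select(range(100_000))
```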
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
cookinai/Blitz-v0.1 | cookinai | 2024-03-06T02:15:08Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T00:58:54Z | ---
license: cc-by-4.0
---
# Base finetune of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on my [Kugelblitz Dataset](https://huggingface.co/datasets/cookinai/kugelblitz-alpha-v0.1)

Trained for only 1 epoch
V0.2 should be coming soon with some more epochs, if this one turns out well |
nlpguy/AlloyIngotNeoY | nlpguy | 2024-03-08T15:38:51Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:yam-peleg/Experiment26-7B",
"base_model:ammarali32/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-08T14:47:25Z | ---
base_model:
- yam-peleg/Experiment26-7B
- ammarali32/multi_verse_model
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the task_swapping_ties merge method using [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) as a base.
### Models Merged
The following models were included in the merge:
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: ammarali32/multi_verse_model
dtype: bfloat16
merge_method: task_swapping_ties
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: yam-peleg/Experiment26-7B
parameters:
diagonal_offset: 2.0
weight: 0.4
- layer_range: [0, 32]
model:
model:
path: ammarali32/multi_verse_model
parameters:
weight: 0.6
``` |
TeeZee/GALAXY-XB-v.03 | TeeZee | 2024-03-30T18:48:54Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T01:00:36Z | ---
license: apache-2.0
model-index:
- name: GALAXY-XB-v.03
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.03
name: Open LLM Leaderboard
---
### TeeZee/GALAXY-XB-v.03 ###
An experiment: can DUS (Depth Up-Scaling) be taken one or more steps further?
### Technical notes:
- 12 layers removed from both models, 4 more than in the original paper, but that is 1/4 of all layers (48) as per the original paper (see the sketch after these notes)
- base version of upstage/SOLAR-10.7B-v1.0 used for the merge
- no finetuning done yet; this is just a merge, the first step in the DUS paper
- next step, if evaluation proves that it's at least as 'smart' as the base model, should be finetuning to 'recover' after the merge
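As a loose illustration (not from the card, and not the author's actual merge script, which was presumably produced with a merge tool such as mergekit), a sketch of the DUS-style layer stacking described above; the specific layer ranges kept are assumptions consistent with "12 layers removed from both models":
```python
# Illustrative sketch only: DUS-style depth up-scaling by concatenating layer slices
# from two copies of the 48-layer base model. Layer ranges (0-36 and 12-48) are assumptions.
import copy

import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-v1.0", torch_dtype=torch.bfloat16
)
donor = copy.deepcopy(base)

front = list(base.model.layers[:36])   # copy A with its last 12 layers dropped
back = list(donor.model.layers[12:])   # copy B with its first 12 layers dropped

base.model.layers = torch.nn.ModuleList(front + back)   # 72 layers total
base.config.num_hidden_layers = len(base.model.layers)
for i, layer in enumerate(base.model.layers):
    layer.self_attn.layer_idx = i                        # keep KV-cache indexing consistent
base.save_pretrained("galaxy-xb-sketch")
```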
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.03)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.37|
|AI2 Reasoning Challenge (25-Shot)|61.77|
|HellaSwag (10-Shot) |83.59|
|MMLU (5-Shot) |64.55|
|TruthfulQA (0-shot) |44.19|
|Winogrande (5-shot) |81.06|
|GSM8k (5-shot) |45.03|
### Results
- a small quality loss can be observed compared to the base model, as described in the DUS paper
- this merge has the best evaluation results, so it will be finetuned to 'recover' from the merge
- finetuning will be done on 5-10% of the OpenOrca dataset and the full DPO datasets used by SOLAR
- v03 > v01 > v02: based on average evaluation scores, removing 1/4 of the total layers seems to be the correct way to scale DUS
|
rwitz/experiment26-truthy-iter-2 | rwitz | 2024-03-12T04:41:24Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T04:18:11Z | ---
library_name: transformers
tags: []
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
feeltheAGI/Maverick-Math-7B | feeltheAGI | 2024-03-14T10:15:56Z | 727 | 6 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"dataset:microsoft/orca-math-word-problems-200k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T08:36:24Z | ---
datasets:
- microsoft/orca-math-word-problems-200k
license: apache-2.0
---
### Maverick-Math-7B

### Model description
Maverick-Math is a Mistral fine-tune, trained on top of math and code datasets, and it performs very well on benchmarks.
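The card does not include a usage snippet, so here is a minimal sketch using the standard `transformers` API. The plain-text prompt format is an assumption (the expected template is not documented), and the example question is only illustrative.

```python
# Minimal usage sketch; adjust the prompt format to whatever the fine-tune expects
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "feeltheAGI/Maverick-Math-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```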
## 🏆 Evaluation
### gsm8k
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|-----:|---|-----:|
|gsm8k| 3|strict-match | 5|exact_match|0.7331|± |0.0122|
| | |flexible-extract| 5|exact_match|0.7400|± |0.0121|
### mathqa
|Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------|------:|------|------|--------|-----:|---|-----:|
|mathqa| 1|none |None |acc |0.3591|± |0.0088|
| | |none |None |acc_norm|0.3635|± |0.0088| |
icefog72/Kunokukulemonchini-7b | icefog72 | 2024-04-11T01:25:09Z | 727 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"alpaca",
"conversational",
"base_model:grimjim/kukulemon-7B",
"base_model:Nitral-AI/Kunocchini-7b-128k-test",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T02:32:03Z | ---
base_model:
- grimjim/kukulemon-7B
- Nitral-AI/Kunocchini-7b-128k-test
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
license: cc-by-nc-4.0
model-index:
- name: Kunokukulemonchini-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.20
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/Kunokukulemonchini-7b
name: Open LLM Leaderboard
---
# Kunokukulemonchini-7b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Here is a 4.1bpw exl2 quant, [Kunokukulemonchini-7b-4.1bpw-exl2](https://huggingface.co/icefog72/Kunokukulemonchini-7b-4.1bpw-exl2), for people like me with 6 GB of VRAM.
Thanks to Natkituwu for:
- 3.5bpw [Kunokukulemonchini-7b-3.5bpw-exl2](https://huggingface.co/Natkituwu/Kunokukulemonchini-7b-3.5bpw-exl2)
- 5.0bpw [Kunokukulemonchini-7b-5.0bpw-exl2](https://huggingface.co/Natkituwu/Kunokukulemonchini-7b-5.0bpw-exl2)
- 6.5bpw [Kunokukulemonchini-7b-6.5bpw-exl2](https://huggingface.co/Natkituwu/Kunokukulemonchini-7b-6.5bpw-exl2)
- 7.1bpw [Kunokukulemonchini-7b-7.1bpw-exl2](https://huggingface.co/Natkituwu/Kunokukulemonchini-7b-7.1bpw-exl2)
- 8.0bpw [Kunokukulemonchini-7b-8.0bpw-exl2](https://huggingface.co/Natkituwu/Kunokukulemonchini-7b-8.0bpw-exl2)
## Advertisement
- Check out new merge model [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
## Merge Details
The kukulemon-7B config.json was slightly edited before the merge to get at least a ~32k context window; a hypothetical example of such an edit is sketched below.
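The card does not say exactly which fields were changed; purely as an illustration (the field below is a guess at the typical edit for a Mistral-based model), such a tweak can be made like this:

```python
# Hypothetical example of the kind of config edit described above; the exact
# fields changed for this merge are not documented in the card.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("grimjim/kukulemon-7B")
config.max_position_embeddings = 32768  # target at least a ~32k context window
config.save_pretrained("kukulemon-7B-32k")
```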
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
* [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
## How to download, including from branches
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Kunokukulemonchini-7b`:
```shell
mkdir icefog72/Kunokukulemonchini-7b
huggingface-cli download icefog72/Kunokukulemonchini-7b --local-dir Kunokukulemonchini-7b --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir FOLDERNAME
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MODEL --local-dir FOLDERNAME --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/kukulemon-7B
layer_range: [0, 32]
- model: Nitral-AI/Kunocchini-7b-128k-test
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Kunocchini-7b-128k-test
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__Kunokukulemonchini-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.61|
|AI2 Reasoning Challenge (25-Shot)|66.72|
|HellaSwag (10-Shot) |86.31|
|MMLU (5-Shot) |65.31|
|TruthfulQA (0-shot) |61.89|
|Winogrande (5-shot) |78.45|
|GSM8k (5-shot) |60.20|
|
Gille/StrangeMerges_42-7B-dare_ties | Gille | 2024-03-18T19:29:04Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rwitz/experiment26-truthy-iter-0",
"Kukedlc/Neural4gsm8k",
"Gille/StrangeMerges_30-7B-slerp",
"base_model:rwitz/experiment26-truthy-iter-0",
"base_model:Kukedlc/Neural4gsm8k",
"base_model:Gille/StrangeMerges_30-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T19:19:43Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- rwitz/experiment26-truthy-iter-0
- Kukedlc/Neural4gsm8k
- Gille/StrangeMerges_30-7B-slerp
base_model:
- rwitz/experiment26-truthy-iter-0
- Kukedlc/Neural4gsm8k
- Gille/StrangeMerges_30-7B-slerp
---
# StrangeMerges_42-7B-dare_ties
StrangeMerges_42-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0)
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: rwitz/experiment26-truthy-iter-0
parameters:
weight: 0.3
density: 0.8
- model: Kukedlc/Neural4gsm8k
parameters:
weight: 0.2
density: 0.8
- model: Gille/StrangeMerges_30-7B-slerp
parameters:
weight: 0.5
density: 0.8
base_model: Gille/StrangeMerges_41-7B-dare_ties
merge_method: dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_42-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
abhishek/autotrain-ixpiv-6kj1e | abhishek | 2024-03-19T08:02:25Z | 727 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T07:46:18Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Yuma42/KangalKhan-Beta-Sapphire-7B | Yuma42 | 2024-03-25T00:49:29Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kaist-ai/mistral-orpo-capybara-7k",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:kaist-ai/mistral-orpo-capybara-7k",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T00:44:15Z | ---
tags:
- merge
- mergekit
- lazymergekit
- kaist-ai/mistral-orpo-capybara-7k
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- kaist-ai/mistral-orpo-capybara-7k
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
license: apache-2.0
language:
- en
---
# KangalKhan-Beta-Sapphire-7B
KangalKhan-Beta-Sapphire-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kaist-ai/mistral-orpo-capybara-7k](https://huggingface.co/kaist-ai/mistral-orpo-capybara-7k)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: kaist-ai/mistral-orpo-capybara-7k
layer_range: [0, 32]
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: kaist-ai/mistral-orpo-capybara-7k
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Beta-Sapphire-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
M4-ai/Hercules-Mini-1.8B | M4-ai | 2024-03-30T15:18:49Z | 727 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:Locutusque/hercules-v4.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T00:06:01Z | ---
library_name: transformers
license: other
datasets:
- Locutusque/hercules-v4.0
language:
- en
inference:
parameters:
do_sample: true
temperature: 1
top_p: 0.7
top_k: 4
max_new_tokens: 250
repetition_penalty: 1.1
---
# Hercules-Mini-1.8B
<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned Qwen1.5-1.8B on Locutusque's Hercules-v4.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it using 700,000 examples of Hercules-v4.
- **Developed by:** M4-ai
- **Language(s) (NLP):** English and maybe Chinese
- **License:** tongyi-qianwen license
- **Finetuned from model:** [Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
General-purpose assistant, question answering, chain-of-thought reasoning, etc.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The eos token was not set up properly, so to prevent infinite generation you'll need to implement a stopping criterion that halts generation once the model emits the `<|im_end|>` token.
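A minimal way to do that with `transformers` is sketched below (it assumes the tokenizer ships a ChatML-style chat template, as is typical for Qwen1.5-based models); alternatively, passing the `<|im_end|>` token id as `eos_token_id` to `generate` has the same effect.

```python
# Stop generation when the model emits <|im_end|> (sketch; adapt to your setup)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

model_id = "M4-ai/Hercules-Mini-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")

class StopOnImEnd(StoppingCriteria):
    def __call__(self, input_ids, scores, **kwargs):
        # stop as soon as the most recently generated token is <|im_end|>
        return input_ids[0, -1].item() == im_end_id

messages = [{"role": "user", "content": "Write a haiku about the sea."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=250,
    stopping_criteria=StoppingCriteriaList([StopOnImEnd()]),
)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```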
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Evaluation
Coming soon
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Locutusque/hercules-v4.0
#### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
## Technical Specifications
#### Hardware
We used 8 Kaggle TPUs and trained at a global batch size of 256 and a sequence length of 1536.
## Contributions
Thanks to @Tonic, @aloobun, @fhai50032, and @Locutusque for their contributions to this model. |
jisukim8873/mistral-7B-alpaca-case-1-2 | jisukim8873 | 2024-04-01T04:27:30Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T03:24:51Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aratako/c4ai-command-r-v01-japanese-instruct | Aratako | 2024-04-07T15:18:37Z | 727 | 4 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"ja",
"en",
"base_model:CohereForAI/c4ai-command-r-v01",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T03:56:52Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
language:
- ja
- en
base_model:
- CohereForAI/c4ai-command-r-v01
---
# c4ai-command-r-v01-japanese-instruct
[Click here for the GGUF version](https://huggingface.co/Aratako/c4ai-command-r-v01-japanese-instruct-GGUF)
## Overview
This model is [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) with additional Japanese instruction tuning applied using [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF-%E5%85%AC%E9%96%8B/).
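The card does not include a usage example; the sketch below assumes the model keeps the same chat template as the base c4ai-command-r-v01 (see the base model's card), so treat it as a starting point rather than a documented interface.

```python
# Minimal usage sketch, assuming the chat template of the base c4ai-command-r-v01 is unchanged
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Aratako/c4ai-command-r-v01-japanese-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example Japanese instruction: "Where is the capital of Japan?"
messages = [{"role": "user", "content": "日本の首都はどこですか?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(input_ids.to(model.device), max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0]))
```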
## Training setup
A GPU server was rented on Runpod and training was run on 4x A6000. The main training parameters are listed below; a rough PEFT configuration sketch reconstructed from them follows the list.
- lora_r: 64
- lora_alpha: 128
- lora_dropout: 0.05
- lora_target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- learning_rate: 2e-5
- num_train_epochs: 10
- batch_size: 50
- max_seq_length: 2048
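Expressed as a PEFT configuration, the adapter settings above would look roughly like this (a reconstruction for illustration, not the actual training script):

```python
# Rough reconstruction of the adapter settings listed above (illustrative only)
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```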
## Evaluation
Results for jsquad (jsquad-1.1-0.3, 2-shot), jcommonsenseqa (jcommonsenseqa-1.1-0.3, 3-shot), jnli (jnli-1.3-0.3, 3-shot), and marc_ja (marc_ja-1.1-0.3, 3-shot) are shown below.
(8-bit quantization; jsquad scores were divided by 100, and each value was rounded to three decimal places)
An improvement in the average score can be observed.
|Model|jsquad(exact_match)|jcommonsenseqa(acc)|jnli(acc)|marc_ja(acc)|average|
|---|---|---|---|---|---|
| c4ai-command-r-v01 |0.809 | 0.902 | 0.466 | 0.954 | 0.783 |
| **c4ai-command-r-v01-japanese-instruct** | **0.836** | **0.911** | **0.537** | **0.940** | **0.806** |
Evaluation was done with [lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness).
The [japanese-mt-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) results for the base model and this model are shown below (single-turn, 4-bit quantization).
The overall score did not change much. However, while the base model's outputs occasionally mixed English into the Japanese, manual inspection of this model's outputs showed no such mixing, so the training does appear to have had some effect.
|Model|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score|
|---|---|---|---|---|---|---|---|---|---|
| c4ai-command-r-v01 | 6.1 | 7.9 | 9.7 | 2.4 | 6.0 | 8.3 | 9.8 | 8.6 | 7.35 |
| **c4ai-command-r-v01-japanese-instruct** | **5.6** | **8.3** | **8.1** | **3.4** | **6.1** | **7.9** | **9.2** | **8.6** | **7.15** |

As an aside, the scores on the English-language [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) also improved slightly, for whatever reason.
[Details](https://huggingface.co/datasets/open-llm-leaderboard/details_Aratako__c4ai-command-r-v01-japanese-instruct)
| Metric |c4ai-command-r-v01|**c4ai-command-r-v01-japanese-instruct**|
|---------------------------------|----:|----:|
|Avg. |68.54|**68.85**|
|AI2 Reasoning Challenge (25-Shot)|65.53|**65.87**|
|HellaSwag (10-Shot) |87|**85.62**|
|MMLU (5-Shot) |68.2|**67.61**|
|TruthfulQA (0-shot) |52.32|**51.01**|
|Winogrande (5-shot) |81.53|**82.95**|
|GSM8k (5-shot) |56.63|**60.05**|
## License
The base model, [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01), is distributed under CC-BY-NC 4.0 and [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
The dataset used for fine-tuning, [ichikara-instruction](https://liat-aip.sakura.ne.jp/wp/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF%E4%BD%9C%E6%88%90/llm%E3%81%AE%E3%81%9F%E3%82%81%E3%81%AE%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%A9%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%87%E3%83%BC%E3%82%BF-%E5%85%AC%E9%96%8B/), is distributed under CC-BY-NC-SA 4.0.
Therefore, the license of this model is CC-BY-NC-SA 4.0 together with [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). (Please point it out if this understanding is mistaken.) |
saucam/Nereus-7B | saucam | 2024-04-10T05:56:19Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"NousResearch/Hermes-2-Pro-Mistral-7B",
"conversational",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T08:36:43Z | ---
tags:
- merge
- mergekit
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NousResearch/Hermes-2-Pro-Mistral-7B
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
license: apache-2.0
---

# 🌊 Nereus-7B
Nereus-7B excels at conversations, coding, and tasks that require structured output in JSON.
It is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
density: 0.5
weight: 0.4
# No parameters necessary for base model
- model: NousResearch/Hermes-2-Pro-Mistral-7B
parameters:
density: 0.5
weight: 0.6
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
int8_mask: true
dtype: bfloat16
```
## Eval Results
|Benchmark| Model |agieval|gpt4all|bigbench|truthfulqa|Average|
|---------|----------------------------------------------------|------:|------:|-------:|---------:|------:|
|nous |[Nereus-7B](https://huggingface.co/saucam/Nereus-7B)| 42.8| 72.21| 39.17| 54.32| 52.12|
|Benchmark| Model |winogrande| arc |gsm8k|mmlu|truthfulqa|hellaswag|Average|
|---------|----------------------------------------------------|---------:|----:|----:|---:|---------:|--------:|------:|
|openllm |[Nereus-7B](https://huggingface.co/saucam/Nereus-7B)| 76.95|62.54|46.25|59.6| 54.32| 83.23| 63.82|
For detailed results [see here](https://github.com/saucam/model_evals/blob/main/saucam/Nereus-7B/README.md)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/Nereus-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Sample responses
```
What is a large language model?<|im_end|>
<|im_start|>assistant
A large language model is a type of artificial intelligence (AI) system that has been trained on vast amounts of text data to understand, generate, and predict patterns in human language. It is designed to process and analyze natural language input, making it capable of tasks such as text generation, translation, language translation, and text classification. These models are typically based on deep learning techniques, particularly neural networks, and are trained on large datasets, often consisting of billions of words. Some well-known large language models include GPT-3 by OpenAI, BERT by Google, and T5 by Google. These models can be fine-tuned for specific tasks or domains to improve their performance. They have revolutionized the field of natural language processing and have numerous applications in areas such as chatbots, search engines, and automated writing assistance.
``` |
ShenaoZhang/0.001_idpo_iter_1 | ShenaoZhang | 2024-04-05T10:29:26Z | 727 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T09:41:56Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: 0.001_idpo_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_idpo_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
LiteLLMs/c4ai-command-r-plus-GGUF | LiteLLMs | 2024-05-25T03:22:33Z | 727 | 1 | transformers | [
"transformers",
"gguf",
"GGUF",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-18T00:50:43Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
tags:
- GGUF
quantized_by: andrijdavid
---
# c4ai-command-r-plus-GGUF
- Original model: [c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)
<!-- description start -->
## Description
This repo contains GGUF format model files for [c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/c4ai-command-r-plus-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/c4ai-command-r-plus-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/c4ai-command-r-plus-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/c4ai-command-r-plus-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, followed by a small example sketch:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
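As a quick illustration, a sketch of loading one of these files through LangChain's `LlamaCpp` wrapper is below; it assumes `langchain-community` and `llama-cpp-python` are installed and the GGUF file has already been downloaded as shown earlier.

```python
# Minimal LangChain + llama-cpp-python sketch (the local GGUF path matches the example above)
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # downloaded as shown above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    n_ctx=8192,
    temperature=0.7,
)
print(llm.invoke("Explain what Retrieval Augmented Generation is in two sentences."))
```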
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: c4ai-command-r-plus
# Model Card for C4AI Command R+
🚨 **This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**.
## Model Summary
C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering.
C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-plus
- Model Size: 104 billion parameters
- Context length: 128K
**Try C4AI Command R+**
You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage**
Please install `transformers` from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 8-bit precision**
```python
# pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
**Quantized model through bitsandbytes, 4-bit precision**
This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
**Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
**Context length**: Command R+ supports a context length of 128K.
## Evaluations
Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
| :-- | --: | --: | --: | --: | --: | --: | --: |
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |
We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, or tooling performance, or the evaluation of open ended generations, which we believe Command R+ to be state-of-the-art at. For evaluations of RAG, multilingual and tooling read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
### Tool use & multihop capabilities:
Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.
Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions.
We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example on how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
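A completion in this format can be post-processed into executable tool calls. The helper below is a hedged sketch and not part of the official tokenizer API; it simply extracts the fenced JSON that follows `Action:` and parses it.
````python
# A minimal, assumed post-processing sketch (not an official API) for turning an
# "Action:" completion like the one above into a list of tool calls.
import json
import re

def parse_tool_actions(completion: str):
    # Pull the fenced JSON that follows "Action:" out of the completion.
    match = re.search(r"Action:\s*```json\s*(.*?)```", completion, re.DOTALL)
    if match is None:
        return []
    return json.loads(match.group(1))

completion = 'Action: ```json\n[{"tool_name": "internet_search", "parameters": {"query": "biggest penguin in the world"}}]\n```'
print(parse_tool_actions(completion))
# [{'tool_name': 'internet_search', 'parameters': {'query': 'biggest penguin in the world'}}]
````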
### Grounded Generation and RAG Capabilities:
Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
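The grounding spans in a completion like the one above can be converted into `(document id, cited text)` pairs. The regular expression below is a hedged sketch, not an official parser.
```python
# An assumed sketch for extracting grounding spans of the form <co: N>text</co: N>
# from a grounded answer; not part of the official API.
import re

def extract_citations(grounded_answer: str):
    pattern = re.compile(r"<co: (\d+)>(.*?)</co: \1>", re.DOTALL)
    return [(int(doc_id), text) for doc_id, text in pattern.findall(grounded_answer)]

answer = ("The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest "
          "penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1>.")
print(extract_citations(answer))
# [(0, 'Emperor Penguin'), (0, 'tallest'), (1, 'lives only in Antarctica')]
```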
### Code Capabilities:
Command R+ has been optimized to interact with your code by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation related instructions.
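As a hedged illustration of that recommendation (the prompt, token budget, and loading options below are assumptions, not official guidance), greedy decoding can be requested by disabling sampling:
```python
# A hedged sketch of greedy decoding for a code-related instruction; settings are illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what this Python snippet does: print(sum(range(10)))"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
gen_tokens = model.generate(input_ids.to(model.device), max_new_tokens=256, do_sample=False)  # greedy
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```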
### Model Card Contact
For errors or additional questions about details in this model card, contact [[email protected]](mailto:[email protected]).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
<!-- original-model-card end -->
|
RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf | RichardErkhov | 2024-05-31T05:43:10Z | 727 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-05-31T00:41:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-german-assistant-v4 - GGUF
- Model creator: https://huggingface.co/flozi00/
- Original model: https://huggingface.co/flozi00/Mistral-7B-german-assistant-v4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-german-assistant-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-german-assistant-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-german-assistant-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-german-assistant-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-german-assistant-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-german-assistant-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-german-assistant-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-german-assistant-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-german-assistant-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-german-assistant-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-german-assistant-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-german-assistant-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-german-assistant-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-german-assistant-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-german-assistant-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-german-assistant-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-german-assistant-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-german-assistant-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-german-assistant-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-german-assistant-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-german-assistant-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q6_K.gguf) | Q6_K | 3.56GB |
| [Mistral-7B-german-assistant-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/flozi00_-_Mistral-7B-german-assistant-v4-gguf/blob/main/Mistral-7B-german-assistant-v4.Q8_0.gguf) | Q8_0 | 3.14GB |
Original model description:
---
datasets:
- flozi00/conversations
language:
- de
---
## This project is sponsored by [  ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
# Model Card
This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the "### Assistant:" / "### User:" prompt format and trained with a context length of 8k tokens.
The dataset used is deduplicated and cleaned, contains no code, and is uncensored. The focus is on instruction following and conversational tasks.
The model architecture is based on Mistral v0.1 with 7B parameters, trained on 100% renewable-energy-powered hardware.
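A hedged usage sketch for these GGUF files, assuming llama-cpp-python and the "### User:" / "### Assistant:" layout described above; the file name, context size, and generation settings are illustrative assumptions.
```python
# A hedged sketch (assumed, not from the model author): loading one of the GGUF
# quants with llama-cpp-python and prompting in the "### User:"/"### Assistant:" style.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-7B-german-assistant-v4.Q4_K_M.gguf", n_ctx=8192)

prompt = "### User: Was ist die Hauptstadt von Deutschland?\n### Assistant:"
output = llm(prompt, max_tokens=128, stop=["### User:"])
print(output["choices"][0]["text"])
```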
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00)
|
RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf | RichardErkhov | 2024-06-17T04:13:44Z | 727 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-17T02:22:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
KoSOLAR-10.9B-v0.5 - GGUF
- Model creator: https://huggingface.co/rrw-x2/
- Original model: https://huggingface.co/rrw-x2/KoSOLAR-10.9B-v0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [KoSOLAR-10.9B-v0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q2_K.gguf) | Q2_K | 3.8GB |
| [KoSOLAR-10.9B-v0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.IQ3_XS.gguf) | IQ3_XS | 4.22GB |
| [KoSOLAR-10.9B-v0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.IQ3_S.gguf) | IQ3_S | 4.45GB |
| [KoSOLAR-10.9B-v0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q3_K_S.gguf) | Q3_K_S | 4.42GB |
| [KoSOLAR-10.9B-v0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.IQ3_M.gguf) | IQ3_M | 4.59GB |
| [KoSOLAR-10.9B-v0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q3_K.gguf) | Q3_K | 4.92GB |
| [KoSOLAR-10.9B-v0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q3_K_M.gguf) | Q3_K_M | 4.92GB |
| [KoSOLAR-10.9B-v0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q3_K_L.gguf) | Q3_K_L | 5.34GB |
| [KoSOLAR-10.9B-v0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.IQ4_XS.gguf) | IQ4_XS | 5.51GB |
| [KoSOLAR-10.9B-v0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q4_0.gguf) | Q4_0 | 5.74GB |
| [KoSOLAR-10.9B-v0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.IQ4_NL.gguf) | IQ4_NL | 5.8GB |
| [KoSOLAR-10.9B-v0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q4_K_S.gguf) | Q4_K_S | 5.78GB |
| [KoSOLAR-10.9B-v0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q4_K.gguf) | Q4_K | 6.1GB |
| [KoSOLAR-10.9B-v0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q4_K_M.gguf) | Q4_K_M | 6.1GB |
| [KoSOLAR-10.9B-v0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q4_1.gguf) | Q4_1 | 6.36GB |
| [KoSOLAR-10.9B-v0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q5_0.gguf) | Q5_0 | 6.98GB |
| [KoSOLAR-10.9B-v0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q5_K_S.gguf) | Q5_K_S | 6.98GB |
| [KoSOLAR-10.9B-v0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q5_K.gguf) | Q5_K | 7.17GB |
| [KoSOLAR-10.9B-v0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q5_K_M.gguf) | Q5_K_M | 7.17GB |
| [KoSOLAR-10.9B-v0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q5_1.gguf) | Q5_1 | 7.6GB |
| [KoSOLAR-10.9B-v0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q6_K.gguf) | Q6_K | 8.3GB |
| [KoSOLAR-10.9B-v0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.5-gguf/blob/main/KoSOLAR-10.9B-v0.5.Q8_0.gguf) | Q8_0 | 10.75GB |
Original model description:
---
language:
- ko
base_model:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: LDCC/LDCC-SOLAR-10.7B
layer_range: [0, 48]
- model: hyeogi/SOLAR-10.7B-dpo-v1
layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## Datasets
Finetuned using LoRA with [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
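For illustration, a hedged sketch of a LoRA setup with PEFT is shown below; the rank, target modules, and other hyperparameters are assumptions, not the values used by the authors.
```python
# A hedged PEFT LoRA configuration sketch; all hyperparameters here are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("rrw-x2/KoSOLAR-10.9B-v0.5")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```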
|
XeTute/Atheria-V0.1 | XeTute | 2024-07-02T21:50:16Z | 727 | 1 | null | [
"gguf",
"chemistry",
"biology",
"legal",
"music",
"art",
"code",
"finance",
"medical",
"general",
"base",
"base model",
"text-generation",
"en",
"dataset:XeTute/Conversational-Small",
"license:other",
"region:us"
] | text-generation | 2024-06-29T21:20:03Z | ---
license: other
license_name: xt-aurora-license
license_link: https://huggingface.co/XeTute/AURORA-OpenBeta-V0.5-GGUF/blob/main/LICENSE
datasets:
- XeTute/Conversational-Small
language:
- en
pipeline_tag: text-generation
tags:
- chemistry
- biology
- legal
- music
- art
- code
- finance
- medical
- general
- base
- base model
---
Note:
We're looking for funds; it is getting harder to keep our free inference running.
We've been serving a lot of GPU time to KoboldAI, and we've trained our LLMs on CPUs because our GPUs can't handle it.
We're looking for funds to replace our two GTX1060 3GBs in order to provide better, faster inference and train models more efficiently.
Any help is appreciated:
- PayPal: https://www.paypal.com/paypalme/XeTute
- Ko-Fi: https://ko-fi.com/xetute
Note about this model: We're abandoning this model because we have no money left to make a model this large perform even averagely on the most basic tasks.
If you want to support us, consider donating via the links above. This will help us create models from scratch, ensuring they perform best for what they are built for.

Before you download this model, you can try it out on our website, for free, without any login, etc.
The inference may be slow; you can support us by donating via the links above.
Try out Atheria on:
- Our website: https://xetute.neocities.org/
- KoboldAI Lite [you have to configure it yourself]: https://lite.koboldai.net/
The inference on our website is faster since it skips the waiting list.
About this model:
- Name: Atheria
- Version: 0.1
- IsStable: No
- IsUsable: Yes
- Param Count: ~7B.
- Type: Text-Generation
- Finetuned on: DeepCode-7B-AURORA.
- GGUF Quant: Q8.
Scope of use:
- Math
- Basic Coding
- Reasoning
- NLP
- Basic roleplaying
- General Q & A.
- Private use
Out of scope use:
- Illegal Q & A
- Production
The prompt format used is Vicuna; a sketch of this layout is shown below. The model may make more mistakes than expected; we will fix this when we get the newer GPUs.
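A hedged sketch of that Vicuna-style layout (the system line and wording are illustrative assumptions):
```python
# An assumed sketch of the Vicuna prompt layout; the system message is illustrative.
system = "A chat between a curious user and an artificial intelligence assistant."
user_message = "What is the capital of Pakistan?"
prompt = f"{system} USER: {user_message} ASSISTANT:"
print(prompt)
```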
Check out our
- GitHub: https://github.com/N0CTRON
- X account: https://x.com/XeTute
- HuggingFace: https://huggingface.co/XeTute
- Our website / subdomain on Neocities: https://xetute.neocities.org/
We wish you a memorable chat with Atheria! |
Chun/DialoGPT-large-dailydialog | Chun | 2021-08-08T22:31:47Z | 726 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Entry not found |
facebook/wav2vec2-large-100k-voxpopuli | facebook | 2021-11-05T12:45:52Z | 726 | 4 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"multilingual",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: multilingual
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the 100k unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
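For feature extraction without a tokenizer, a hedged sketch is shown below; the placeholder audio and feature-extractor settings are assumptions, not an official example.
```python
# A hedged feature-extraction sketch; the random "audio" and extractor settings are assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-large-100k-voxpopuli"
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Wav2Vec2Model.from_pretrained(model_id)

speech = torch.randn(16000).numpy()  # placeholder for one second of 16 kHz audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
print(hidden_states.shape)
```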
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
|
flax-community/bert-swahili-news-classification | flax-community | 2023-07-15T14:21:05Z | 726 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"sw",
"dataset:flax-community/swahili-safi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: sw
widget:
- text: "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"
datasets:
- flax-community/swahili-safi
---
## Swahili News Classification with BERT
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
This [model](https://huggingface.co/flax-community/bert-base-uncased-swahili) was used as the base and fine-tuned for this task.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-swahili-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("flax-community/bert-swahili-news-classification")
```
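A hedged end-to-end sketch with the `text-classification` pipeline (the input is a shortened version of the widget text above):
```python
# A hedged classification sketch; the input mirrors the widget example from this card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="flax-community/bert-swahili-news-classification",
)
print(classifier("Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake."))
```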
```
Eval metrics (10% valid set): {'accuracy': 0.9114740008594757}
```
|
huggingartists/travis-scott | huggingartists | 2021-09-10T19:40:02Z | 726 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/travis-scott",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/travis-scott
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5d19fecdb3828ca9ec89dda588e2eb7d.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Travis Scott</div>
<a href="https://genius.com/artists/travis-scott">
<div style="text-align: center; font-size: 14px;">@travis-scott</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Travis Scott.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/travis-scott).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/travis-scott")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1ezlbvd0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Travis Scott's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2w91gglb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2w91gglb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/travis-scott')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/travis-scott")
model = AutoModelWithLMHead.from_pretrained("huggingartists/travis-scott")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
timm/levit_192.fb_dist_in1k | timm | 2024-02-10T23:30:37Z | 726 | 0 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-02-03T21:13:29Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for levit_192.fb_dist_in1k
A LeViT image classification model using convolutional mode (using nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 10.9
- GMACs: 0.7
- Activations (M): 3.2
- Image size: 224 x 224
- **Papers:**
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('levit_192.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'levit_192.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (i.e. a (batch_size, num_features, H, W) tensor)
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 |
|levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 |
|levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 |
|levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 |
|levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 |
|levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 |
## Citation
```bibtex
@InProceedings{Graham_2021_ICCV,
author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs},
title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {12259-12269}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
ai-forever/mGPT-1.3B-romanian | ai-forever | 2023-08-11T08:02:28Z | 726 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt3",
"mgpt",
"ro",
"en",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-10T05:11:57Z | ---
language:
- ro
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# 🇷🇴 Romanian mGPT 1.3B
Language model for Romanian. The model has 1.3B parameters, as you can guess from its name.
Romanian belongs to the Indo-European language family. It's a very lyrical language with approximately 24 million speakers. Here are some facts about it:
1. It is a Romance language, closely related to Italian, French, Spanish, Portuguese, and Catalan.
2. It retains several Latin characteristics, making it unique among the Romance languages.
3. While primarily spoken in Romania and Moldova, there are also Romanian speakers in neighboring countries and diaspora communities worldwide.
## Technical details
It's one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.
We've found additional data for 23 languages, most of which are considered minor, and decided to further tune the base model. **Romanian mGPT 1.3B** was trained for another 5000 steps with batch_size=4 and a context window of **2048** tokens on 1 A100.
Final perplexity for this model on validation is **3.44**.
_Chart of the training loss and perplexity:_

## Other mGPT-1.3B models
- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukranian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)
## Feedback
If you find a bug or have additional data to train the model on your language, please give us feedback.
The model will be improved over time. Stay tuned!
|
liminerity/Blur-7b-v1.21 | liminerity | 2024-03-11T18:30:17Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"udkai/Turdus",
"decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"liminerity/Blur-7b-v1.2",
"base_model:udkai/Turdus",
"base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:liminerity/Blur-7b-v1.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-18T04:30:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- liminerity/Blur-7b-v1.2
base_model:
- udkai/Turdus
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- liminerity/Blur-7b-v1.2
model-index:
- name: Blur-7b-v1.21
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.21
name: Open LLM Leaderboard
---
# Blur-7b-v1.21
Blur-7b-v1.21 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
* [liminerity/Blur-7b-v1.2](https://huggingface.co/liminerity/Blur-7b-v1.2)
## 🧩 Configuration
```yaml
models:
- model: udkai/Turdus
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: liminerity/Blur-7b-v1.2
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: fblgit/UNA-TheBeagle-7b-v1
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Blur-7b-v1.21"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-v1.21)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.18|
|AI2 Reasoning Challenge (25-Shot)|70.82|
|HellaSwag (10-Shot) |88.07|
|MMLU (5-Shot) |64.85|
|TruthfulQA (0-shot) |67.99|
|Winogrande (5-shot) |83.82|
|GSM8k (5-shot) |69.52|
|
Cartinoe5930/DARE-Merging | Cartinoe5930 | 2024-01-23T13:31:24Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:openchat/openchat-3.5-0106",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:WizardLM/WizardMath-7B-V1.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-23T12:40:00Z | ---
base_model:
- openchat/openchat-3.5-0106
- mistralai/Mistral-7B-Instruct-v0.2
- Open-Orca/Mistral-7B-OpenOrca
- WizardLM/WizardMath-7B-V1.1
tags:
- mergekit
- merge
license: apache-2.0
---
# result
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
# No parameters necessary for base model
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.5
weight: 0.3
- model: openchat/openchat-3.5-0106
parameters:
density: 0.5
weight: 0.3
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.5
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
normalize: true
dtype: float16
``` |
cloudyu/Truthful_DPO_TomGrc_FusionNet_34Bx2_MoE | cloudyu | 2024-01-25T22:10:48Z | 726 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"DPO",
"RL-TUNED",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T06:41:16Z | ---
license: mit
tags:
- moe
- DPO
- RL-TUNED
---
* The [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) was used with the dataset jondurbin/truthy-dpo-v0.1 to improve TomGrc/FusionNet_34Bx2_MoE; a hedged setup sketch of this procedure is shown at the end of this card.
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
* Metrics improved by DPO

|
luqmanxyz/FrankenVillain-7B-v1 | luqmanxyz | 2024-03-04T14:33:08Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"jeonsworld/CarbonVillain-en-10.7B-v1",
"conversational",
"base_model:jeonsworld/CarbonVillain-en-10.7B-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T00:44:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- jeonsworld/CarbonVillain-en-10.7B-v1
- jeonsworld/CarbonVillain-en-10.7B-v1
base_model:
- jeonsworld/CarbonVillain-en-10.7B-v1
- jeonsworld/CarbonVillain-en-10.7B-v1
model-index:
- name: FrankenVillain-7B-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 42.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 51.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=luqmanxyz/FrankenVillain-7B-v1
name: Open LLM Leaderboard
---
# FrankenVillain-7B-v1
FrankenVillain-7B-v1 is a Franken merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jeonsworld/CarbonVillain-en-10.7B-v1
layer_range: [0, 24]
- sources:
- model: jeonsworld/CarbonVillain-en-10.7B-v1
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "luqmanxyz/FrankenVillain-7B-v1"
messages = [{"role": "user", "content": "What are the 3 planets closest to the sun"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_luqmanxyz__FrankenVillain-7B-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.34|
|AI2 Reasoning Challenge (25-Shot)|42.75|
|HellaSwag (10-Shot) |51.52|
|MMLU (5-Shot) |48.60|
|TruthfulQA (0-shot) |56.19|
|Winogrande (5-shot) |73.01|
|GSM8k (5-shot) | 0.00|
|
Lvxy1117/amber_fine_tune_001 | Lvxy1117 | 2024-01-28T12:45:36Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T06:51:17Z | ---
license: apache-2.0
language:
- en
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
---
# Model Card for Lvxy1117/amber_fine_tune_001
<!-- Provide a quick summary of what the model is/does. -->
It is a test fine-tuned model based on LLM360/amber.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
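Until the authors provide an official snippet, the following is a hedged, minimal loading sketch that assumes the checkpoint works with the standard causal-LM classes:
```python
# A hedged loading sketch (not provided by the model author); assumes standard causal-LM loading.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Lvxy1117/amber_fine_tune_001"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```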
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gille/StrangeMerges_13-7B-slerp | Gille | 2024-03-04T21:54:39Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_12-7B-slerp",
"uukuguy/speechless-zephyr-code-functionary-7b",
"base_model:Gille/StrangeMerges_12-7B-slerp",
"base_model:uukuguy/speechless-zephyr-code-functionary-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T00:09:19Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_12-7B-slerp
- uukuguy/speechless-zephyr-code-functionary-7b
base_model:
- Gille/StrangeMerges_12-7B-slerp
- uukuguy/speechless-zephyr-code-functionary-7b
model-index:
- name: StrangeMerges_13-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_13-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_13-7B-slerp
StrangeMerges_13-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_12-7B-slerp](https://huggingface.co/Gille/StrangeMerges_12-7B-slerp)
* [uukuguy/speechless-zephyr-code-functionary-7b](https://huggingface.co/uukuguy/speechless-zephyr-code-functionary-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_12-7B-slerp
layer_range: [0, 32]
- model: uukuguy/speechless-zephyr-code-functionary-7b
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_12-7B-slerp
parameters:
t:
- filter: self_attn
value: [0.1, 0.5, 0.3, 0.7, 0.9]
- filter: mlp
value: [0.9, 0.5, 0.7, 0.3, 0.1]
- value: 0.45
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_13-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_13-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.06|
|AI2 Reasoning Challenge (25-Shot)|63.82|
|HellaSwag (10-Shot) |84.95|
|MMLU (5-Shot) |64.90|
|TruthfulQA (0-shot) |48.62|
|Winogrande (5-shot) |79.87|
|GSM8k (5-shot) |54.21|
|
ewqr2130/llama_ppo_1e6step_4000 | ewqr2130 | 2024-01-31T18:05:12Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T17:56:45Z | ---
license: apache-2.0
---
|
ConvexAI/Solutus-3x7B | ConvexAI | 2024-03-04T16:34:35Z | 726 | 7 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T23:36:17Z | ---
language:
- en
license: apache-2.0
tags:
- moe
- merge
model-index:
- name: Solutus-3x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.52
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Solutus-3x7B
name: Open LLM Leaderboard
---

# ConvexAI/Solutus-3x7B
A model to test how MoE will route without square expansion.
[Join our Discord!](https://discord.gg/rJXGjmxqzS)
The config looks like this...(detailed version is in the files and versions):
- [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) - base
- [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #1
- [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) - expert #2
- [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #3
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
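As a rough illustration of that routing step, here is a minimal top-2 gate over expert FFNs in PyTorch (a sketch only, not the actual Mixtral or mergekit implementation; layer sizes and the activation are placeholders):
```python
# Minimal top-2 MoE routing sketch: the router scores every expert per token,
# the two best experts process the token, and their outputs are summed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, hidden_size: int, ffn_size: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts, bias=False)  # the gate network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.SiLU(), nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        )

    def forward(self, x):                       # x: (tokens, hidden)
        logits = self.router(x)                 # (tokens, num_experts)
        weights, idx = logits.topk(2, dim=-1)   # pick the two best experts per token
        weights = F.softmax(weights, dim=-1)    # normalize the two gate scores
        out = torch.zeros_like(x)
        for slot in range(2):                   # combine the two expert outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out
```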

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
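A simplified version of such a load-balancing term, in the spirit of the Switch Transformer formulation (a sketch, not the exact implementation in transformers), looks like this:
```python
# Sketch of a Switch-Transformer-style load-balancing auxiliary loss.
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, num_experts: int) -> torch.Tensor:
    # router_logits: (num_tokens, num_experts)
    probs = F.softmax(router_logits, dim=-1)
    top1 = probs.argmax(dim=-1)                                     # expert chosen per token
    tokens_per_expert = F.one_hot(top1, num_experts).float().mean(dim=0)  # f_i: fraction routed to expert i
    prob_per_expert = probs.mean(dim=0)                             # P_i: mean router probability for expert i
    # minimized when both distributions are uniform (1 / num_experts)
    return num_experts * torch.sum(tokens_per_expert * prob_per_expert)
```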

## "Wait...but you called this a frankenMoE?"
The difference between a regular MoE and a "frankenMoE" lies in the fact that, in a model like the one in this repo, the router layer is not trained simultaneously with the experts.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Solutus-3x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.35|
|AI2 Reasoning Challenge (25-Shot)|72.01|
|HellaSwag (10-Shot) |88.31|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |67.52|
|Winogrande (5-shot) |83.66|
|GSM8k (5-shot) |69.83|
|
jsfs11/HighdensityRPMerge-7B | jsfs11 | 2024-03-04T12:17:53Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Silicon-Maid-7B",
"chargoddard/loyal-piano-m7-cdpo",
"jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES",
"NeverSleep/Noromaid-7b-v0.2",
"athirdpath/NSFW_DPO_vmgb-7b",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:chargoddard/loyal-piano-m7-cdpo",
"base_model:jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES",
"base_model:NeverSleep/Noromaid-7b-v0.2",
"base_model:athirdpath/NSFW_DPO_vmgb-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T09:56:08Z | ---
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
license: apache-2.0
---
# HighdensityRPMerge-7B
HighdensityRPMerge-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
* [jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES](https://huggingface.co/jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES)
* [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2)
* [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b)
## 🧩 Configuration
```yaml
models:
- model: saishf/West-Hermes-7B
# no parameters necessary for base model
- model: SanjiWatsuki/Silicon-Maid-7B
parameters:
weight: 0.4
density: 0.8
- model: chargoddard/loyal-piano-m7-cdpo
parameters:
weight: 0.3
density: 0.8
- model: jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
parameters:
weight: 0.25
density: 0.45
- model: NeverSleep/Noromaid-7b-v0.2
parameters:
weight: 0.25
density: 0.4
- model: athirdpath/NSFW_DPO_vmgb-7b
parameters:
weight: 0.2
density: 0.4
merge_method: dare_ties
base_model: saishf/West-Hermes-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/HighdensityRPMerge-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-corrupted | sonthenguyen | 2024-02-04T16:06:17Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T14:58:25Z | ---
license: apache-2.0
---
Training hyperparameters

LoRA:
- r: 16
- lora_alpha: 16
- lora_dropout: 0.05
- bias: "none"
- task_type: "CAUSAL_LM"
- target_modules: ['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']

Training arguments:
- auto_find_batch_size: True
- gradient_checkpointing: True
- learning_rate: 5e-7
- lr_scheduler_type: "cosine"
- max_steps: 3922
- optim: "paged_adamw_32bit"
- warmup_steps: 100

DPOTrainer:
- beta: 0.1
- max_prompt_length: 1024
- max_length: 1536
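A rough sketch of how these settings could be assembled with peft and trl is shown below (illustrative only; the base model name and the preference dataset are assumptions, not details from this card):
```python
# Sketch of the described DPO setup with peft + trl; names marked as assumptions
# are placeholders, not information from the model card.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed from the repo name
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

args = TrainingArguments(
    output_dir="dpo-out",
    auto_find_batch_size=True,
    gradient_checkpointing=True,
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    max_steps=3922,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    remove_unused_columns=False,  # DPOTrainer needs the raw prompt/chosen/rejected columns
)

# hypothetical dataset with "prompt", "chosen" and "rejected" columns
dataset = load_dataset("your/preference-dataset", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, the frozen base weights act as the reference
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```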
|
ArianAskari/SOLID-SFT-DPO-MixQV2-SOLIDChosen-SFTRejected-Zephyr-7b-beta | ArianAskari | 2024-02-13T06:21:52Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ArianAskari/SOLID",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T00:34:10Z | ---
library_name: transformers
tags: []
license: apache-2.0
language:
- en
datasets: ArianAskari/SOLID
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liminerity/ultra0 | liminerity | 2024-03-11T18:33:02Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"starsnatched/MemGPT-2B",
"liminerity/binarized-ingotrix-slerp-7b",
"conversational",
"base_model:starsnatched/MemGPT-2B",
"base_model:liminerity/binarized-ingotrix-slerp-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T22:44:05Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- starsnatched/MemGPT-2B
- liminerity/binarized-ingotrix-slerp-7b
base_model:
- starsnatched/MemGPT-2B
- liminerity/binarized-ingotrix-slerp-7b
model-index:
- name: ultra0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 41.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 68.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 16.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/ultra0
name: Open LLM Leaderboard
---
# ultra0
ultra0 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [starsnatched/MemGPT-2B](https://huggingface.co/starsnatched/MemGPT-2B)
* [liminerity/binarized-ingotrix-slerp-7b](https://huggingface.co/liminerity/binarized-ingotrix-slerp-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- layer_range: [0, 24]
model: starsnatched/MemGPT-2B
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- layer_range: [0, 24]
model: liminerity/binarized-ingotrix-slerp-7b
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: starsnatched/MemGPT-2B
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/ultra0"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__ultra0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.32|
|AI2 Reasoning Challenge (25-Shot)|41.47|
|HellaSwag (10-Shot) |68.02|
|MMLU (5-Shot) |33.37|
|TruthfulQA (0-shot) |41.49|
|Winogrande (5-shot) |65.51|
|GSM8k (5-shot) |16.07|
|
Eric111/CatunaMayo | Eric111 | 2024-02-26T19:46:02Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Eric111/caTUNABeagle",
"Eric111/AlphaMayo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T20:59:55Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Eric111/caTUNABeagle
- Eric111/AlphaMayo
---
Acknowledgements: https://github.com/mlabonne/llm-course
# CatunaMayo
CatunaMayo is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Eric111/caTUNABeagle](https://huggingface.co/Eric111/caTUNABeagle)
* [Eric111/AlphaMayo](https://huggingface.co/Eric111/AlphaMayo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Eric111/caTUNABeagle
layer_range: [0, 32]
- model: Eric111/AlphaMayo
layer_range: [0, 32]
merge_method: slerp
base_model: Eric111/caTUNABeagle
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
ChaoticNeutrals/Cookie_7B | ChaoticNeutrals | 2024-03-04T20:12:44Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:jeiku/Rainbow_69_7B",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T01:41:04Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- jeiku/SpaghettiOs_7B
- jeiku/Rainbow_69_7B
model-index:
- name: Cookie_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Cookie_7B
name: Open LLM Leaderboard
---
# Cookie
A reasonably logical model with a few datasets thrown in to increase RP abilities. This is a good candidate for a balanced 7B model that can provide assistant functionality alongside roleplaying or romantic endeavors.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/SpaghettiOs_7B](https://huggingface.co/jeiku/SpaghettiOs_7B) as a base.
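As a rough intuition for the DARE step (a sketch, not mergekit's actual implementation): each fine-tuned model's parameter delta from the base is mostly dropped at random and the surviving entries rescaled, before the TIES-style sign election combines the sparsified deltas.
```python
# Illustrative sketch of DARE (drop-and-rescale) on a single parameter tensor;
# mergekit's real dare_ties additionally performs TIES sign election across models.
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float = 0.1) -> torch.Tensor:
    delta = finetuned - base                 # the "task vector"
    keep = torch.rand_like(delta) < density  # keep only a `density` fraction of entries
    return delta * keep / density            # rescale so the expected delta is preserved

# merged parameter (per tensor) = base + sum of weighted, sparsified deltas from each model
```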
### Models Merged
The following models were included in the merge:
* [jeiku/Rainbow_69_7B](https://huggingface.co/jeiku/Rainbow_69_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: jeiku/SpaghettiOs_7B
parameters:
normalize: true
models:
- model: jeiku/SpaghettiOs_7B
parameters:
weight: 1
- model: jeiku/Rainbow_69_7B
parameters:
weight: 1
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Cookie_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.87|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |87.57|
|MMLU (5-Shot) |64.51|
|TruthfulQA (0-shot) |66.88|
|Winogrande (5-shot) |81.37|
|GSM8k (5-shot) |61.18|
|
CorticalStack/mistral-7b-dolphin-sft | CorticalStack | 2024-02-17T14:26:38Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T14:24:36Z | ---
license: apache-2.0
---
# mistral-7b-dolphin-sft
mistral-7b-dolphin-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [cognitivecomputations/dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin) dataset.
## Fine-tuning configuration
### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True
Trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
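A minimal sketch of how these settings might map onto Unsloth and TRL (illustrative only; the target modules, dataset file name and text formatting are assumptions not stated in this card):
```python
# Illustrative sketch of the described QLoRA SFT setup with Unsloth + TRL.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption: not listed in the card
)

# dataset loading / formatting is an assumption; dolphin ships jsonl files with
# instruction / input / output fields that must be rendered into a single "text" field
dataset = load_dataset(
    "cognitivecomputations/dolphin",
    data_files="flan1m-alpaca-uncensored.jsonl",  # file name is an assumption
    split="train",
)
dataset = dataset.map(lambda ex: {
    "text": f"{ex['instruction']}\n{ex['input']}\n{ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="sft-out",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=6,
        optim="adamw_torch_fused",
        max_steps=100,          # takes precedence over num_train_epochs when set
        learning_rate=2e-4,
        weight_decay=0.1,
        lr_scheduler_type="linear",
    ),
)
trainer.train()
```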
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
CorticalStack/mistral-7b-alpaca-sft | CorticalStack | 2024-02-17T19:24:37Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T19:22:39Z | ---
license: apache-2.0
---
# mistral-7b-alpaca-sft
mistral-7b-alpaca-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset.
## Fine-tuning configuration
### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True
Trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
lodrick-the-lafted/Hermes-Instruct-7B-100K | lodrick-the-lafted | 2024-03-04T12:24:50Z | 726 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-100K",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T03:04:10Z | ---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-100K
model-index:
- name: Hermes-Instruct-7B-100K
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-100K
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K/resolve/main/hermes-instruct.png>
# Hermes-Instruct-7B-100K
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) trained with 100K rows of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Hermes-Instruct-7B-100K"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Hermes-Instruct-7B-100K)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.96|
|AI2 Reasoning Challenge (25-Shot)|61.52|
|HellaSwag (10-Shot) |82.84|
|MMLU (5-Shot) |60.95|
|TruthfulQA (0-shot) |63.62|
|Winogrande (5-shot) |76.87|
|GSM8k (5-shot) |43.97|
|
splm/openchat-spin-slimorca-iter2 | splm | 2024-02-22T14:53:03Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T14:46:58Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MSL7/INEX8-7B | MSL7 | 2024-03-11T18:31:44Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge3",
"yam-peleg/Experiment26-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T22:44:44Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge3
- yam-peleg/Experiment26-7B
model-index:
- name: INEX8-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.19
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.83
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX8-7B
name: Open LLM Leaderboard
---
MADE WITH LOVE BY LIMINERITY
# INEX8-7B
INEX8-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge3](https://huggingface.co/liminerity/merge3)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
## 🧩 Configuration
```yaml
MODEL_NAME = "merge"
slices:
- sources:
- model: MSL7/INEX4-7b
layer_range: [0, 32]
- model: yam-peleg/Experiment24-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX4-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
MODEL_NAME = "merge1"
slices:
- sources:
- model: liminerity/merge
layer_range: [0, 32]
- model: CorticalStack/shadow-clown-7B-dare
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
MODEL_NAME = "merge2"
slices:
- sources:
- model: liminerity/merge1
layer_range: [0, 32]
- model: bardsai/jaskier-7b-dpo-v6.1
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
MODEL_NAME = "merge3"
slices:
- sources:
- model: liminerity/merge2
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
MODEL_NAME: "INEX8-7b"
slices:
- sources:
- model: liminerity/merge3
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MSL7__INEX8-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.44|
|AI2 Reasoning Challenge (25-Shot)|73.29|
|HellaSwag (10-Shot) |89.19|
|MMLU (5-Shot) |64.47|
|TruthfulQA (0-shot) |77.83|
|Winogrande (5-shot) |84.85|
|GSM8k (5-shot) |68.99|
|
YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1 | YeungNLP | 2024-03-03T08:13:50Z | 726 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2305.18290",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T03:49:34Z | ---
library_name: transformers
license: apache-2.0
basemodel: Qwen/Qwen1.5-7B
---
## Model Card for Firefly-Qwen1.5
[firefly-qwen1.5-en-7b](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b) and [firefly-qwen1.5-en-7b-dpo-v0.1](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1) are trained based on [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) to act as a helpful and harmless AI assistant.
We use [Firefly](https://github.com/yangjianxin1/Firefly) to train our models on **a single V100 GPU** with QLoRA.
firefly-qwen1.5-en-7b is fine-tuned based on Qwen1.5-7B with English instruction data, and firefly-qwen1.5-en-7b-dpo-v0.1 is trained with [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) based on firefly-qwen1.5-en-7b.
Our models outperform official [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat), [Gemma-7B-it](https://huggingface.co/google/gemma-7b-it), [Zephyr-7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
<img src="pics/open_llm.png" width="800">
Although our models are trained with English data, you can also try chatting with them in Chinese, since Qwen1.5 is also good at Chinese. However, we have not yet evaluated their performance in Chinese.
We advise you to install transformers>=4.37.0.
## Performance
We evaluate our models on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), where they achieve good performance.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-----------------------------------|--------|--------|-----------|--------|------------|------------|--------|
| firefly-gemma-7b | 62.93 | 62.12 | 79.77 | 61.57 | 49.41 | 75.45 | 49.28 |
| **firefly-qwen1.5-en-7b-dpo-v0.1** | 62.36 | 54.35 | 76.04 | 61.21 | 56.4 | 72.06 | 54.13 |
| zephyr-7b-beta | 61.95 | 62.03 | 84.36 | 61.07 | 57.45 | 77.74 | 29.04 |
| **firefly-qwen1.5-en-7b** | 61.44 | 53.41 | 75.51 | 61.67 |51.96 |70.72 | 55.34 |
| vicuna-13b-v1.5 | 55.41 | 57.08 | 81.24 | 56.67 | 51.51 | 74.66 | 11.3 |
| Xwin-LM-13B-V0.1 | 55.29 | 62.54 | 82.8 | 56.53 | 45.96 | 74.27 | 9.63 |
| Qwen1.5-7B-Chat | 55.15 | 55.89 | 78.56 | 61.65 | 53.54 | 67.72 | 13.57 |
| gemma-7b-it | 53.56 | 51.45 | 71.96 | 53.52 | 47.29 | 67.96 | 29.19 |
## Usage
The chat template of our chat models is the same as that of the official Qwen1.5-7B-Chat:
```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
hello, who are you?<|im_end|>
<|im_start|>assistant
I am a AI program developed by Firefly<|im_end|>
```
You can run inference with the chat script in [Firefly](https://github.com/yangjianxin1/Firefly/blob/master/script/chat/chat.py).
You can also use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name_or_path = "YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
prompt = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions. "
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to('cuda')
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1500,
top_p = 0.9,
temperature = 0.35,
repetition_penalty = 1.0,
eos_token_id=tokenizer.encode('<|im_end|>', add_special_tokens=False)
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Training Details
In both the SFT and DPO stages, **we use only a single V100 GPU** with QLoRA, and we use [Firefly](https://github.com/yangjianxin1/Firefly) to train our models.
### Training Setting
The following hyperparameters are used during SFT:
- num_epochs: 1
- learning_rate: 2e-4
- total_train_batch_size: 32
- max_seq_length: 2048
- optimizer: paged_adamw_32bit
- lr_scheduler_type: constant_with_warmup
- warmup_steps: 700
- lora_rank: 64
- lora_alpha: 16
- lora_dropout: 0.05
- gradient_checkpointing: true
- fp16: true
The following hyperparameters were used during DPO:
- num_epochs: 1
- learning_rate: 2e-4
- total_train_batch_size: 32
- max_seq_length: 1600
- max_prompt_length: 500
- optimizer: paged_adamw_32bit
- lr_scheduler_type: constant_with_warmup
- warmup_steps: 200
- lora_rank: 64
- lora_alpha: 16
- lora_dropout: 0.05
- gradient_checkpointing: true
- fp16: true
### Training metrics
Training Rewards/margins in DPO:
<img src="pics/margins.png" width="600">
Training Rewards/accuracies in DPO:
<img src="pics/accuracies.png" width="500">
Training loss in DPO:
<img src="pics/loss.png" width="500">
The table below shows the full set of DPO training metrics:
| Epoch | Step | Loss | Rewards/accuracies | Rewards/margins | Rewards/chosen | Rewards/rejected | Logits/chosen| Logits/rejected | Logps/chosen| Logps/rejected|
|---|---|---|---|---|---|---|---|---|---|---|
|0.05|100|0.6231|0.6587|0.3179|0.0404|-0.2774|1.1694|1.2377|-284.5586|-255.4863|
|0.1|200|0.5945|0.6894|0.5988|-0.1704|-0.7693|1.012|1.0283|-284.3049|-268.1887|
|0.16|300|0.5754|0.6981|0.8314|-0.282|-1.1133|0.8912|0.8956|-283.6926|-270.3117|
|0.21|400|0.5702|0.7194|0.9369|-0.1944|-1.1313|0.7255|0.7557|-291.2833|-273.9706|
|0.26|500|0.5913|0.695|0.8784|-0.4524|-1.3309|0.5491|0.5535|-289.5705|-271.754|
|0.31|600|0.5743|0.6994|1.0192|-0.4505|-1.4698|0.6446|0.6399|-296.5292|-277.824|
|0.37|700|0.5876|0.7219|1.0471|-0.6998|-1.747|0.4955|0.4329|-303.7684|-289.0117|
|0.42|800|0.5831|0.715|1.0485|-0.8185|-1.8671|0.5589|0.4804|-295.6313|-288.0656|
|0.47|900|0.5674|0.7119|1.1854|-1.2085|-2.3939|0.3467|0.2249|-302.3643|-286.2816|
|0.52|1000|0.5794|0.7138|1.1458|-0.8423|-1.9881|0.5116|0.4248|-299.3136|-287.3934|
|0.58|1100|0.5718|0.7194|1.2897|-1.4944|-2.7841|0.6392|0.5739|-316.6829|-294.1148|
|0.63|1200|0.5718|0.7275|1.2459|-1.7543|-3.0002|0.4999|0.4065|-316.7873|-297.8514|
|0.68|1300|0.5789|0.72|1.3379|-1.8485|-3.1864|0.4289|0.3172|-314.8326|-296.8319|
|0.73|1400|0.5462|0.7425|1.4074|-1.9865|-3.3939|0.3645|0.2333|-309.4503|-294.3931|
|0.79|1500|0.5829|0.7156|1.2582|-2.1183|-3.3766|0.4193|0.2796|-307.5281|-292.0817|
|0.84|1600|0.5575|0.7375|1.471|-2.1429|-3.6139|0.6547|0.5152|-310.9912|-298.899|
|0.89|1700|0.5638|0.745|1.5433|-2.991|-4.5343|0.7336|0.6782|-328.2657|-307.5182|
|0.94|1800|0.5559|0.7181|1.4484|-2.8818|-4.3302|0.7997|0.8327|-316.2716|-295.1836|
|0.99|1900|0.5627|0.7387|1.5378|-2.7941|-4.332|0.8573|0.858|-324.9405|-310.1192| |
0-hero/Matter-0.1-Slim-7B-A | 0-hero | 2024-04-03T06:02:46Z | 726 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.1-Slim-A",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T15:56:35Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1-Slim-A
language:
- en
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is finetuned on the [slim-A version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-A) (~285k rows), which is curated from over 35 datasets analyzing >6B tokens.
### Training
It took ~15 hours to train 3 epochs on 4x A100s (80GB) using Axolotl for a full fine-tune.
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, using the additional tokens listed below.
Function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
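Building on the example above, here is a minimal, hypothetical client-side sketch for pulling the function-call payload out of the model's output using the tokens listed earlier (the helper name and sample string are illustrative only):

```python
import ast
import json

def extract_function_call(text: str):
    """Return the payload between <|begin_func|> and <|end_func|>, or None if absent."""
    start, end = "<|begin_func|>", "<|end_func|>"
    if start not in text or end not in text:
        return None
    payload = text.split(start, 1)[1].split(end, 1)[0].strip()
    # The example above quotes the "arguments" value with single quotes (Python-literal
    # style rather than strict JSON), so fall back to ast.literal_eval when json fails.
    try:
        call = json.loads(payload)
    except json.JSONDecodeError:
        call = ast.literal_eval(payload)
    if isinstance(call.get("arguments"), str):
        call["arguments"] = json.loads(call["arguments"])
    return call

sample = """<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|>"""
print(extract_function_call(sample))
# {'name': 'get_news_headlines', 'arguments': {'country': 'United States'}}
```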
|
CultriX/NeuralCeptrix-7B-SLERP | CultriX | 2024-03-19T20:46:24Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MergeCeption-7B-v3",
"CultriX/NeuralTrix-bf16",
"base_model:CultriX/MergeCeption-7B-v3",
"base_model:CultriX/NeuralTrix-bf16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T10:39:03Z | ---
tags:
- merge
- mergekit
- lazymergekit
- CultriX/MergeCeption-7B-v3
- CultriX/NeuralTrix-bf16
base_model:
- CultriX/MergeCeption-7B-v3
- CultriX/NeuralTrix-bf16
license: apache-2.0
---
# NeuralCeptrix-7B-SLERP
NeuralCeptrix-7B-SLERP is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CultriX/MergeCeption-7B-v3](https://huggingface.co/CultriX/MergeCeption-7B-v3)
* [CultriX/NeuralTrix-bf16](https://huggingface.co/CultriX/NeuralTrix-bf16)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CultriX/MergeCeption-7B-v3
layer_range: [0, 32]
- model: CultriX/NeuralTrix-bf16
layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/MergeCeption-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/NeuralCeptrix-7B-SLERP"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jeiku/Eros_Prodigadigm_7B | jeiku | 2024-03-23T01:52:57Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T01:38:13Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# prodigadigm
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* erosprodigy
* erosparadigm
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: erosparadigm
layer_range: [0, 32]
- model: erosprodigy
layer_range: [0, 32]
merge_method: slerp
base_model: erosparadigm
parameters:
t:
- filter: self_attn
value: [0.5, 0.5, 0.5, 0.5, 0.5]
- filter: mlp
value: [0.5, 0.5, 0.5, 0.5, 0.5]
- value: 0.5
dtype: bfloat16
``` |
Locutusque/Hyperion-3.0-Mistral-7B-DPO | Locutusque | 2024-03-24T19:26:38Z | 726 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Locutusque/hyperion-dpo-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T02:46:33Z | ---
library_name: transformers
license: apache-2.0
language:
- en
widget:
- example_title: Reasoning
text: >
<|im_start|>system
You are Hyperion, an AI assistant developed by Sebastian Gabarain with a
focus in reasoning, coding, science, and other general-purpose subjects. You
are an AI assistant that follows instruction extremely well. Help as much as
you can.
<|im_end|>
<|im_start|>user
In this task, you are given a description of an object and its surroundings.
Your
task is to describe the motion of the object mentioned in the description
using
differential equations. The answer should be a mathematical expression in t.
Description:
A ball is thrown up at 30 ft/sec from 3 feet off the ground. Solve this
step-by-step,
explaining each step in detail.
<|im_end|>
<|im_start|>assistant
output:
text: >
Hyperion here. The task requires you to describe the motion of a ball
thrown up using differential equations. Here's the detailed explanation of
how to solve it:
1. Identify the given information:
- Initial velocity (v0): 30 ft/sec
- Initial height (y0): 3 feet off the ground
2. Determine the acceleration due to gravity (g) at sea level, which is
approximately -32.2 ft/s^2. Since we're dealing with a short distance and
not very high altitudes, we can consider this value of g as constant.
3. Use Newton's second law of motion to find the differential equation
that describes the motion:
- The net force acting on the ball is the sum of gravity (mg) and air resistance (which we will neglect for simplicity). So, Fnet = mg.
- Since acceleration equals force divided by mass, a = Fnet/m. In this case, m cancels out, so a = g.
- The motion equation becomes: dv/dt = g = -32.2 ft/s^2 (since we're moving upwards). This is the first differential equation.
4. Since we want to describe the ball's position (y) as well, we need
another differential equation. We can use kinematics equations:
- The velocity equation is v = v0 + at. Substitute the value of a from step 3:
dv/dt = -32.2 ft/s^2. Rewriting it as an integral:
∫dv = -32.2 ft/s^2 * dt
- Integrating with respect to time, we get v(t) = -16.1 t + C (constant of integration).
5. Now we have two equations, one for velocity and another for position:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + C (equation 2)
6. We need to find the constant C using the initial conditions given in
the problem. We know that initially, at time t=0, the ball's velocity is
v0:
- Equation 2 becomes v0 = -16.1 * 0 + C, which simplifies to C = v0 = 30 ft/sec (initial velocity).
7. Now we have both equations with the constants filled in:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
8. Finally, we need to find the ball's position (y) as a function of time:
- Use the second kinematics equation y = y0 + v0*t + ½*a*t^2 (since acceleration is constant, it can be moved outside the integral).
- Substitute the values for v0 and a:
y(t) = 3 ft + (30 ft/sec) * t + (-½ * 32.2 ft/s^2) * t^2
That's it! The differential equation describing the motion of the ball is:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
- y(t) = 3 ft + (30 ft/sec) * t - (16.1 t^2) ft (equation 3)
datasets:
- Locutusque/hyperion-dpo-v1.0
---
# Hyperion-3.0-Mistral-7B-DPO
## Model Details
- **Model Name**: Locutusque/Hyperion-3.0-Mistral-7B-DPO
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Publisher**: Locutusque
- **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning
- **Language**: Multi-domain, English language
- **License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-3.0-Mistral-7B-DPO` is an advanced language model fine-tuned with a dataset of 20,000 meticulously curated high-quality preference pairs using Direct Preference Optimization (DPO). The examples were generated by GPT-4 to ensure exceptional quality and relevance. This model is designed to provide superior performance across a wide range of complex tasks, including question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers, developers, and organizations seeking a highly capable and reliable language model for tackling challenging problems across various domains. Potential use cases include:
- Intelligent tutoring systems and educational applications in science, medicine, mathematics, and computer science
- Advanced conversational AI for technical support, customer service, and domain-specific chatbots
- Code generation and analysis tools for software development and programming assistance
- Medical text analysis and information retrieval for healthcare professionals and researchers
- Mathematical problem-solving and logical reasoning applications for academia and industry
## Training Data
The `Locutusque/Hyperion-3.0-Mistral-7B-DPO` model was fine-tuned on a carefully curated dataset of 20,000 preference pairs, where 4,000 examples were used to fine-tune. These examples were generated by GPT-4 to ensure the highest quality and relevance across various domains, including programming, medical texts, mathematical problems, and reasoning tasks. The training data was further optimized using Direct Preference Optimization (DPO) to align the model's outputs with human preferences and improve overall performance.
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF
## Evaluation Results
mmlu flan cot 5-shot
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|-------------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_formal_logic | 0|get-answer| 0|exact_match|0.2143|± |0.1138|
| - mmlu_flan_cot_fewshot_high_school_european_history | 0|get-answer| 0|exact_match|0.6667|± |0.1143|
| - mmlu_flan_cot_fewshot_high_school_us_history | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_high_school_world_history | 0|get-answer| 0|exact_match|0.5385|± |0.0997|
| - mmlu_flan_cot_fewshot_international_law | 0|get-answer| 0|exact_match|0.9231|± |0.0769|
| - mmlu_flan_cot_fewshot_jurisprudence | 0|get-answer| 0|exact_match|0.5455|± |0.1575|
| - mmlu_flan_cot_fewshot_logical_fallacies | 0|get-answer| 0|exact_match|0.7778|± |0.1008|
| - mmlu_flan_cot_fewshot_moral_disputes | 0|get-answer| 0|exact_match|0.5526|± |0.0817|
| - mmlu_flan_cot_fewshot_moral_scenarios | 0|get-answer| 0|exact_match|0.4000|± |0.0492|
| - mmlu_flan_cot_fewshot_philosophy | 0|get-answer| 0|exact_match|0.7647|± |0.0738|
| - mmlu_flan_cot_fewshot_prehistory | 0|get-answer| 0|exact_match|0.6571|± |0.0814|
| - mmlu_flan_cot_fewshot_professional_law | 0|get-answer| 0|exact_match|0.3294|± |0.0362|
| - mmlu_flan_cot_fewshot_world_religions | 0|get-answer| 0|exact_match|0.8947|± |0.0723|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_business_ethics | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_clinical_knowledge | 0|get-answer| 0|exact_match|0.5862|± |0.0931|
| - mmlu_flan_cot_fewshot_college_medicine | 0|get-answer| 0|exact_match|0.6364|± |0.1050|
| - mmlu_flan_cot_fewshot_global_facts | 0|get-answer| 0|exact_match|0.6000|± |0.1633|
| - mmlu_flan_cot_fewshot_human_aging | 0|get-answer| 0|exact_match|0.6087|± |0.1041|
| - mmlu_flan_cot_fewshot_management | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_marketing | 0|get-answer| 0|exact_match|0.8000|± |0.0816|
| - mmlu_flan_cot_fewshot_medical_genetics | 0|get-answer| 0|exact_match|1.0000|± |0.0000|
| - mmlu_flan_cot_fewshot_miscellaneous | 0|get-answer| 0|exact_match|0.8023|± |0.0432|
| - mmlu_flan_cot_fewshot_nutrition | 0|get-answer| 0|exact_match|0.6667|± |0.0833|
| - mmlu_flan_cot_fewshot_professional_accounting | 0|get-answer| 0|exact_match|0.4839|± |0.0912|
| - mmlu_flan_cot_fewshot_professional_medicine | 0|get-answer| 0|exact_match|0.5806|± |0.0901|
| - mmlu_flan_cot_fewshot_virology | 0|get-answer| 0|exact_match|0.3889|± |0.1182|
| - mmlu_flan_cot_fewshot_social_sciences |N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_econometrics | 0|get-answer| 0|exact_match|0.4167|± |0.1486|
| - mmlu_flan_cot_fewshot_high_school_geography | 0|get-answer| 0|exact_match|0.9091|± |0.0627|
| - mmlu_flan_cot_fewshot_high_school_government_and_politics| 0|get-answer| 0|exact_match|0.8095|± |0.0878|
| - mmlu_flan_cot_fewshot_high_school_macroeconomics | 0|get-answer| 0|exact_match|0.6512|± |0.0735|
| - mmlu_flan_cot_fewshot_high_school_microeconomics | 0|get-answer| 0|exact_match|0.5769|± |0.0988|
| - mmlu_flan_cot_fewshot_high_school_psychology | 0|get-answer| 0|exact_match|0.9000|± |0.0391|
| - mmlu_flan_cot_fewshot_human_sexuality | 0|get-answer| 0|exact_match|0.6667|± |0.1421|
| - mmlu_flan_cot_fewshot_professional_psychology | 0|get-answer| 0|exact_match|0.6522|± |0.0578|
| - mmlu_flan_cot_fewshot_public_relations | 0|get-answer| 0|exact_match|0.5833|± |0.1486|
| - mmlu_flan_cot_fewshot_security_studies | 0|get-answer| 0|exact_match|0.4074|± |0.0964|
| - mmlu_flan_cot_fewshot_sociology | 0|get-answer| 0|exact_match|0.8182|± |0.0842|
| - mmlu_flan_cot_fewshot_us_foreign_policy | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
| - mmlu_flan_cot_fewshot_abstract_algebra | 0|get-answer| 0|exact_match|0.0909|± |0.0909|
| - mmlu_flan_cot_fewshot_anatomy | 0|get-answer| 0|exact_match|0.4286|± |0.1373|
| - mmlu_flan_cot_fewshot_astronomy | 0|get-answer| 0|exact_match|0.5625|± |0.1281|
| - mmlu_flan_cot_fewshot_college_biology | 0|get-answer| 0|exact_match|0.5000|± |0.1291|
| - mmlu_flan_cot_fewshot_college_chemistry | 0|get-answer| 0|exact_match|0.5000|± |0.1890|
| - mmlu_flan_cot_fewshot_college_computer_science | 0|get-answer| 0|exact_match|0.2727|± |0.1408|
| - mmlu_flan_cot_fewshot_college_mathematics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_college_physics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_computer_security | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_conceptual_physics | 0|get-answer| 0|exact_match|0.6538|± |0.0951|
| - mmlu_flan_cot_fewshot_electrical_engineering | 0|get-answer| 0|exact_match|0.7500|± |0.1118|
| - mmlu_flan_cot_fewshot_elementary_mathematics | 0|get-answer| 0|exact_match|0.7317|± |0.0701|
| - mmlu_flan_cot_fewshot_high_school_biology | 0|get-answer| 0|exact_match|0.5938|± |0.0882|
| - mmlu_flan_cot_fewshot_high_school_chemistry | 0|get-answer| 0|exact_match|0.3636|± |0.1050|
| - mmlu_flan_cot_fewshot_high_school_computer_science | 0|get-answer| 0|exact_match|0.5556|± |0.1757|
| - mmlu_flan_cot_fewshot_high_school_mathematics | 0|get-answer| 0|exact_match|0.3103|± |0.0874|
| - mmlu_flan_cot_fewshot_high_school_physics | 0|get-answer| 0|exact_match|0.2353|± |0.1060|
| - mmlu_flan_cot_fewshot_high_school_statistics | 0|get-answer| 0|exact_match|0.3043|± |0.0981|
| - mmlu_flan_cot_fewshot_machine_learning | 0|get-answer| 0|exact_match|0.4545|± |0.1575|
| Groups |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_social_sciences|N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-3.0-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nExplain the implications of quantum entanglement in layman's terms.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.7, top_k=6) # These are the recommended sample settings.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
While the training data has been carefully curated and optimized, there may still be some inconsistencies or biases present due to the inherent complexity and diversity of the source dataset. Users should be aware of potential limitations and carefully evaluate the model's outputs for their specific use case.
Additionally, this model is highly compliant and will attempt to respond to most requests. For enterprise-level deployment, it is strongly recommended to further fine-tune the model using DPO to align its behavior with specific requirements and constraints.
## Licensing Information
This model is released under the Apache-2.0 license. |
wolfeidau/NeuralHermes-2.5-Mistral-7B | wolfeidau | 2024-03-24T08:01:32Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"finetune",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T05:43:33Z | ---
library_name: transformers
tags:
- dpo
- mistral
- finetune
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
---
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes-2.5 was created by fine-tuning OpenHermes-2.5 using a RLHF-like technique: Direct Preference Optimization (DPO) using the `Intel/orca_dpo_pairs` dataset.
## Usage
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

new_model = "wolfeidau/NeuralHermes-2.5-Mistral-7B"  # model repo to load
# Format prompt
message = [
{"role": "system", "content": "You are a helpful assistant chatbot."},
{"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
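For illustration only, a hedged sketch of how the three hyperparameter groups above plug into TRL's `DPOTrainer` (signature shown for TRL ~0.7; newer releases move these arguments into `DPOConfig`). The starting checkpoint, output directory, and dataset handling are assumptions, and the preference data still has to be mapped into prompt/chosen/rejected columns before training.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# LoRA settings from the list above
peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

# Training arguments from the list above
training_args = TrainingArguments(
    output_dir="neuralhermes-dpo",     # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# Needs to be reformatted into prompt/chosen/rejected columns first (not shown)
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

# DPOTrainer settings from the list above
trainer = DPOTrainer(
    model,
    ref_model=None,            # with a PEFT adapter, TRL derives the frozen reference model
    args=training_args,
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```
 |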
M4-ai/Hercules-Qwen1.5-14B | M4-ai | 2024-03-30T23:59:28Z | 726 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:Locutusque/hercules-v4.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T04:20:40Z | ---
library_name: transformers
license: other
datasets:
- Locutusque/hercules-v4.0
language:
- en
---
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
line-height: 1.6;
color: #f5f5f5;
background-color: #1e2a36;
margin: 0;
padding: 0;
}
.container {
max-width: 1200px;
margin: 20px auto;
padding: 20px;
background-color: #2a3f54;
border-radius: 8px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
h1 {
font-size: 2.5rem;
color: #51a3d3;
text-align: center;
margin-bottom: 30px;
width: 100%;
}
h2 {
font-size: 1.75rem;
margin: 20px 0;
color: #63b8ea;
padding-bottom: 10px;
}
h3 {
font-size: 1.25rem;
color: #80c8f4;
}
p, a {
font-size: 1rem;
}
p {
color: #b0c2ce;
margin-bottom: 20px;
}
ul {
list-style-type: none;
padding: 0;
display: flex;
flex-wrap: wrap;
justify-content: space-between;
width: 100%;
}
li {
background-color: #34495e;
padding: 20px;
margin-bottom: 10px;
border-radius: 4px;
cursor: pointer;
transition: background-color 0.3s ease, color 0.3s ease;
overflow: hidden;
color: #b0c2ce;
width: calc(50% - 10px);
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
li:hover {
background-color: #4e6a81;
color: #dfe8f1;
}
.section-content {
margin-top: 15px;
border-top: 1px solid #4e6a81;
padding-top: 10px;
}
a {
color: #a4c8e1;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
pre {
background-color: #2c3e50;
padding: 10px;
border-radius: 5px;
overflow-x: auto;
color: #b0c2ce;
}
</style>
<div class="container">
<h1>Hercules-Qwen1.5-14B</h1>
</div>
<ul>
<li>
<h2>Model Details</h2>
<div class="section-content">
<h3>Model Description</h3>
<p>This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it using 700,000 examples of Hercules-v4.</p>
<p><strong>Developed by:</strong> M4-ai</p>
<p><strong>Language(s) (NLP):</strong> English and maybe Chinese</p>
<p><strong>License:</strong> tongyi-qianwen license</p>
<p><strong>Finetuned from model:</strong> <a href="https://huggingface.co/Qwen/Qwen1.5-14B">Qwen1.5-14B</a></p>
</div>
</li>
<li>
<h2>Uses</h2>
<div class="section-content">
<p>General purpose assistant, question answering, chain-of-thought, etc..</p>
<h3>Recommendations</h3>
<p>Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.</p>
</div>
</li>
<li>
<h2>Evaluation</h2>
<div class="section-content">
<p>Coming soon</p>
</div>
</li>
<li>
<h2>Training Details</h2>
<div class="section-content">
<h3>Training Data</h3>
<p><a href="https://huggingface.co/datasets/Locutusque/hercules-v4.0">https://huggingface.co/datasets/Locutusque/hercules-v4.0</a></p>
<h4>Training Hyperparameters</h4>
<p><strong>Training regime:</strong> bf16 non-mixed precision</p>
</div>
</li>
<li>
<h2>Technical Specifications</h2>
<div class="section-content">
<h4>Hardware</h4>
<p>We used 8 Kaggle TPUs, and we trained at a global batch size of 128 and sequence length of 1024</p>
</div>
</li>
<li>
<h2>Contributions</h2>
<div class="section-content">
<p>Thanks to @Tonic, @aloobun, @fhai50032, and @Locutusque for their contributions to this model.</p>
</div>
</li>
</ul> |
Kukedlc/NeuralKuke-4-All-7b | Kukedlc | 2024-03-30T06:14:26Z | 726 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-4-ARC-7b",
"Kukedlc/Neural-4-Wino-7b",
"Kukedlc/NeuralSirKrishna-7b",
"Kukedlc/Neural-4-QA-7b",
"Kukedlc/Neural-4-Maths-7b",
"base_model:Kukedlc/Neural-4-ARC-7b",
"base_model:Kukedlc/Neural-4-Wino-7b",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:Kukedlc/Neural-4-QA-7b",
"base_model:Kukedlc/Neural-4-Maths-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T06:06:54Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-4-ARC-7b
- Kukedlc/Neural-4-Wino-7b
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/Neural-4-QA-7b
- Kukedlc/Neural-4-Maths-7b
base_model:
- Kukedlc/Neural-4-ARC-7b
- Kukedlc/Neural-4-Wino-7b
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/Neural-4-QA-7b
- Kukedlc/Neural-4-Maths-7b
license: apache-2.0
---
# NeuralKuke-4-All-7b
NeuralKuke-4-All-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural-4-ARC-7b](https://huggingface.co/Kukedlc/Neural-4-ARC-7b)
* [Kukedlc/Neural-4-Wino-7b](https://huggingface.co/Kukedlc/Neural-4-Wino-7b)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [Kukedlc/Neural-4-QA-7b](https://huggingface.co/Kukedlc/Neural-4-QA-7b)
* [Kukedlc/Neural-4-Maths-7b](https://huggingface.co/Kukedlc/Neural-4-Maths-7b)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
# No parameters necessary for base model
- model: Kukedlc/Neural-4-ARC-7b
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/Neural-4-Wino-7b
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/Neural-4-QA-7b
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/Neural-4-Maths-7b
parameters:
density: 0.55
weight: 0.2
merge_method: dare_ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralKuke-4-All-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
optimum/mistral-1.1b-testing | optimum | 2024-04-19T12:42:59Z | 726 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-19T12:41:27Z | ---
license: apache-2.0
---
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
(model card is repeated due to open llm leaderboard length requirements)
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy!
mistralized tinyllama since flash attention training on llama w/ flash-attn is buggy.
it's based on the 3t base model (not chat tuned).
not extensively tested.
enjoy! |
THU-KEG/ADELIE-SFT | THU-KEG | 2024-05-10T05:13:17Z | 726 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"Information Extraction",
"IE",
"Named Entity Recogniton",
"Event Extraction",
"Relation Extraction",
"LLaMA",
"en",
"dataset:ACE05",
"dataset:conll2003",
"dataset:conll2012_ontonotesv5",
"dataset:rams",
"dataset:tacred",
"dataset:fewrel",
"dataset:maven",
"arxiv:2405.05008",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-10T03:51:12Z | ---
license: llama2
datasets:
- ACE05
- conll2003
- conll2012_ontonotesv5
- rams
- tacred
- fewrel
- maven
language:
- en
metrics:
- f1
pipeline_tag: text-generation
tags:
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recogniton
- Event Extraction
- Relation Extraction
- LLaMA
---
# Model Card for ADELIE-SFT
<!-- Provide a quick summary of what the model is/does. -->
<p align="justify">
We introduce <b>ADELIE</b> (<b>A</b>ligning large language mo<b>DEL</b>s on <b>I</b>nformation <b>E</b>xtraction), an aligned LLM that effectively solves various IE tasks, including closed IE, open IE, and on-demand IE. We first collect and construct a high-quality alignment corpus <font face="Verdana">IEInstruct</font> for IE. Then we train ADELIE<sub>SFT</sub> using instruction tuning on <font face="Verdana">IEInstruct</font>. We further train ADELIE<sub>SFT</sub> with direct preference optimization (DPO) objective, resulting in ADELIE<sub>DPO</sub>. Extensive experiments on various held-out IE datasets demonstrate that our models (ADELIE<sub>SFT</sub> and ADELIE<sub>DPO</sub>) achieve state-of-the-art (SoTA) performance among open-source models. We further explore the general capabilities of ADELIE, and experimental results reveal that their general capabilities do not exhibit a noticeable decline.
- 📖 Paper: [ADELIE: Aligning Large Language Models on Information Extraction](https://arxiv.org/abs/2405.05008)
</p>
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** LLaMA2 License for the base model.
- **Finetuned from model [optional]:** LLaMA2-7B
|
duyntnet/TenyxChat-7B-v1-imatrix-GGUF | duyntnet | 2024-05-21T15:12:41Z | 726 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"TenyxChat-7B-v1",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-21T12:26:40Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- TenyxChat-7B-v1
---
Quantizations of https://huggingface.co/tenyx/TenyxChat-7B-v1
# From original readme
## Usage
Our model uses a simple chat template based on OpenChat 3.5. The chat template, together with a Hugging Face generation example, is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ 'User:' + message['content'] + eos_token }}
{% elif message['role'] == 'system' %}
{{ 'System:' + message['content'] + eos_token }}
{% elif message['role'] == 'assistant' %}
{{ 'Assistant:' + message['content'] + eos_token }}
{% endif %}
{% if loop.last and add_generation_prompt %}\n{{ 'Assistant:' }}{% endif %}\n{% endfor %}
```
### Hugging Face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
### Output
```
<s> System:You are a friendly chatbot who always responds in the style of a pirate.<|end_of_turn|>
User:Hi. I would like to make a hotel booking.<|end_of_turn|>
Assistant: Ahoy there me hearty! Arr, ye be lookin' fer a place to rest yer weary bones, eh?
Well then, let's set sail on this grand adventure and find ye a swell place to stay!
To begin, tell me the location ye be seekin' and the dates ye be lookin' to set sail.
And don't ye worry, me matey, I'll be sure to find ye a place that'll make ye feel like a king or queen on land!
``` |
CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF | CHE-72 | 2024-06-22T18:46:56Z | 726 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"license:other",
"region:us"
] | text-generation | 2024-06-22T18:46:34Z | ---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF --hf-file qwen1.5-4b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF --hf-file qwen1.5-4b-chat-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF --hf-file qwen1.5-4b-chat-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q8_0-GGUF --hf-file qwen1.5-4b-chat-q8_0.gguf -c 2048
```
|
huggingartists/elton-john | huggingartists | 2022-06-06T10:32:19Z | 725 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/elton-john",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/elton-john
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/ec76d346c4c8b057169194c1781021fd.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elton John</div>
<a href="https://genius.com/artists/elton-john">
<div style="text-align: center; font-size: 14px;">@elton-john</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Elton John.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/elton-john).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/elton-john")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/188xpm2n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Elton John's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1rgstntu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1rgstntu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/elton-john')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/elton-john")
model = AutoModelWithLMHead.from_pretrained("huggingartists/elton-john")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
taufeeque/tiny-gpt2 | taufeeque | 2023-01-28T01:15:12Z | 725 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-01-25T07:18:20Z | ---
tags:
- generated_from_trainer
datasets:
- wikitext
metrics:
- accuracy
model-index:
- name: output_tiny
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: wikitext wikitext-103-v1
type: wikitext
args: wikitext-103-v1
metrics:
- name: Accuracy
type: accuracy
value: 0.2132901596611274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_tiny
This model is a fine-tuned version of [gpt2_tiny_random](https://huggingface.co/gpt2_tiny_random) on the wikitext wikitext-103-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3359
- Accuracy: 0.2133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50000
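As a rough, hypothetical illustration, these settings map onto `transformers.TrainingArguments` roughly as follows (the output directory is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output_tiny",          # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=50000,
)
```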
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DKYoon/mt5-large-lm-adapt | DKYoon | 2023-04-13T19:34:37Z | 725 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2205.12647",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-04-13T18:49:49Z | ---
license: apache-2.0
---
🤗 Language model initialized from mT5 and trained for an additional 100K steps on the Prefix LM objective using mC4 data.
Paper: [Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation](https://arxiv.org/abs/2205.12647)
Authors: Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant
PyTorch port of the original Flax checkpoint at [Google/T5X repository](https://github.com/google-research/t5x).
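A minimal usage sketch, assuming the standard mT5 sequence-to-sequence interface in `transformers` (the prompt is only an example):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "DKYoon/mt5-large-lm-adapt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# Prefix-LM style usage: give a prefix and let the model continue it
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
 |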
MLRush/chinese-lm-30m | MLRush | 2023-04-26T00:33:54Z | 725 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"code",
"zh",
"dataset:wikipedia",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-22T08:06:57Z | ---
license: openrail
datasets:
- wikipedia
language:
- zh
metrics:
- perplexity
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
28M parameters:
vocab_size=12829
num_hidden_layers=8
num_attention_heads=8
intermediate_size=1024
max_position_embeddings=512
hidden_size=512
block_size=512
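Since the card's tags indicate a GPT-NeoX architecture, here is a hedged sketch of how these settings could be instantiated (note that `block_size` is a data-chunking setting rather than a config field):

```python
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

config = GPTNeoXConfig(
    vocab_size=12829,
    hidden_size=512,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,
    max_position_embeddings=512,
)
model = GPTNeoXForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```
 |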
Geonmo/CLIP-Giga-config-fixed | Geonmo | 2023-06-28T02:19:09Z | 725 | 0 | transformers | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2023-06-28T01:18:15Z | ---
license: mit
---
This repository will be removed when the config in `laion/CLIP-ViT-bigG-14-laion2B-39B-b160k` is fixed.
|
nisten/obsidian-3b-multimodal-q6-gguf | nisten | 2023-12-09T17:57:24Z | 725 | 65 | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2023-10-29T04:28:10Z | ---
license: mit
---
# Original model: [NousResearch - Obsidian-3B-V0.5](https://huggingface.co/NousResearch/Obsidian-3B-V0.5)
## gguf q6 quantised version by Nisten
To run the server inside the /llama.cpp/ folder, run this IN YOUR TERMINAL:
## ./server -m obsidian-q6.gguf --mmproj mmproj-obsidian-f16.gguf -ngl 42
that's it, it's literally one command; now open your browser at http://127.0.0.1:8080
## FIRST TIME TO RUN on mac or linux: make a new folder, COPY PASTE THIS TO DL & RUN EVERYTHING in ONE SHOT
```bash
git clone -b stablelm-support https://github.com/Galunid/llama.cpp.git && \
cd llama.cpp && \
make && \
wget https://huggingface.co/nisten/obsidian-3b-multimodal-q6-gguf/resolve/main/obsidian-q6.gguf && \
wget https://huggingface.co/nisten/obsidian-3b-multimodal-q6-gguf/resolve/main/mmproj-obsidian-f16.gguf && \
./server -m obsidian-q6.gguf --mmproj mmproj-obsidian-f16.gguf -ngl 42
|
sethuiyer/SynthIQ-7b | sethuiyer | 2024-03-07T07:12:28Z | 725 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"dataset:stingning/ultrachat",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Open-Orca/OpenOrca",
"dataset:TIGER-Lab/MathInstruct",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:teknium/openhermes",
"dataset:meta-math/MetaMathQA",
"dataset:Open-Orca/SlimOrca",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp",
"base_model:ehartford/dolphin-2.1-mistral-7b",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:bhenrym14/mistral-7b-platypus-fp16",
"base_model:ehartford/samantha-1.2-mistral-7b",
"base_model:iteknium/CollectiveCognition-v1.1-Mistral-7B",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-29T09:38:34Z | ---
language:
- en
license: llama2
library_name: transformers
tags:
- mistral
- merge
datasets:
- stingning/ultrachat
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- TIGER-Lab/MathInstruct
- OpenAssistant/oasst_top1_2023-08-25
- teknium/openhermes
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
pipeline_tag: text-generation
base_model:
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- iteknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha
model-index:
- name: sethuiyer/SynthIQ-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/SynthIQ-7b
name: Open LLM Leaderboard
---
<p align="center">
<img src="https://codeberg.org/aninokuma/DeydooAssistant/raw/branch/main/logo.webp" height="256px" alt="SynthIQ">
</p>
# SynthIQ
This is SynthIQ, rated **92.23/100** by GPT-4 across varied complex prompts. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
| Benchmark Name | Score |
| ---------------- | ----- |
| ARC | 65.87 |
| HellaSwag | 85.82 |
| MMLU | 64.75 |
| TruthfulQA | 57.00 |
| Winogrande | 78.69 |
| GSM8K | 64.06 |
| AGIEval | 42.67 |
| GPT4All | 73.71 |
| Bigbench | 44.59 |
## Update - 19/01/2024
Tested to work well with autogen and CrewAI
GGUF Files
[Q4_K_M](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q4_K_M.gguf) - medium, balanced quality - recommended
[Q_6_K](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q6_K.gguf) - very large, extremely low quality loss
[Q8_0](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q8.gguf) - very large, extremely low quality loss - not recommended
**Important Update**: SynthIQ is now available on Ollama. You can use it by running the command ```ollama run stuehieyr/synthiq``` in your
terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on
a Google Colab backend.
# Yaml Config
```yaml
slices:
- sources:
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
layer_range: [0, 32]
- model: uukuguy/speechless-mistral-six-in-one-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
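A minimal usage sketch with `transformers`, assuming the tokenizer ships the ChatML chat template shown above (the messages are only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/SynthIQ-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are SynthIQ, a helpful assistant."},
    {"role": "user", "content": "Explain what model merging with SLERP does, in two sentences."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```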
The license is the Llama 2 license, as uukuguy/speechless-mistral-six-in-one-7b is under the Llama 2 license.
# [Nous Benchmark Evaluation Results](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard)
Detailed results can be found [here](https://gist.github.com/sethuiyer/f47dee388a4e95d46181c98d37d66a58)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__SynthIQ-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.37|
|AI2 Reasoning Challenge (25-Shot)|65.87|
|HellaSwag (10-Shot) |85.82|
|MMLU (5-Shot) |64.75|
|TruthfulQA (0-shot) |57.00|
|Winogrande (5-shot) |78.69|
|GSM8k (5-shot) |64.06|
|
EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-15936 | EmbeddedLLM | 2024-01-20T16:07:06Z | 725 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"merge",
"fine-tuned",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:teknium/openhermes",
"dataset:migtissera/Synthia-v1.3",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:ehartford/dolphin-2.2.1-mistral-7b",
"base_model:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:ehartford/samantha-1.2-mistral-7b",
"base_model:Arc53/docsgpt-7b-mistral",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:openchat/openchat-3.5-1210",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-05T04:25:57Z | ---
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- ise-uiuc/Magicoder-OSS-Instruct-75K
- teknium/openhermes
- migtissera/Synthia-v1.3
language:
- en
tags:
- merge
- fine-tuned
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- ehartford/dolphin-2.2.1-mistral-7b
- SciPhi/SciPhi-Mistral-7B-32k
- ehartford/samantha-1.2-mistral-7b
- Arc53/docsgpt-7b-mistral
- HuggingFaceH4/zephyr-7b-beta
- meta-math/MetaMath-Mistral-7B
- Open-Orca/Mistral-7B-OpenOrca
- openchat/openchat-3.5-1210
- beowolx/MistralHermes-CodePro-7B-v1
- TIGER-Lab/MAmmoTH-7B-Mistral
- teknium/OpenHermes-2.5-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Model Description
This is a fine-tuned model based on EmbeddedLLM/Mistral-7B-Merge-14-v0.3, trained for 3 epochs.
The datasets used are:
* dolphin
* dolphin-coder
* Magicoder-OSS-Instruct-75K
* openhermes
* Synthia-v1.3
## Chat Template
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Training
The model has been fine-tuned for 3 epochs on 4 A100s using axolotl.
# Shout-Out to OSS
Thank you to the Open Source AI community for bringing together marvelous code frameworks and datasets. |
brucethemoose/SUS-Bagel-200K-DARE-Test | brucethemoose | 2024-01-11T19:17:29Z | 725 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Yi",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-06T09:17:17Z | ---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
base_model: []
tags:
- mergekit
- merge
- Yi
---
Just an experiment to try and extend the context of SUS, a 4K Yi model, and DPO Bagel, which breaks down after 4K context. Yi 4K was used as a base (even for bagel which is technically a Yi 200K model), and Yi 200K is merged in with a density of 1.
I wanted to include Hermes 34B, but something funky about its tokenizer breaks mergekit.
A component of another merge. Auto-generated mergekit description below:
***
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama as a base.
### Models Merged
The following models were included in the merge:
* /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B
* /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
* /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
* /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
# No parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
weight: 0.5
density: 1
- model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B
parameters:
weight: 0.2
density: 0.12
- model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
parameters:
weight: 0.2
density: 0.15
- model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
parameters:
weight: 0.1
density: 0.12
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
|
damerajee/Oot-v2_lll | damerajee | 2024-03-05T15:07:37Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/Marcoro14-7B-slerp",
"Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"conversational",
"base_model:mlabonne/Marcoro14-7B-slerp",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T14:53:20Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/Marcoro14-7B-slerp
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
pipeline_tag: conversational
base_model:
- mlabonne/Marcoro14-7B-slerp
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
model-index:
- name: Oot-v2_lll
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Oot-v2_lll
name: Open LLM Leaderboard
---
# Oot-v2_lll
Oot-v2_lll is a merge of the following models using Mergekit:
* [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
* [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/Marcoro14-7B-slerp
layer_range: [0, 32]
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/Marcoro14-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "damerajee/Oot-v2_lll"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_damerajee__Oot-v2_lll)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.73|
|AI2 Reasoning Challenge (25-Shot)|69.28|
|HellaSwag (10-Shot) |86.60|
|MMLU (5-Shot) |64.96|
|TruthfulQA (0-shot) |62.57|
|Winogrande (5-shot) |80.82|
|GSM8k (5-shot) |72.18|
|
Vasanth/Beast-Soul | Vasanth | 2024-01-22T00:02:12Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"base_model:udkai/Turdus",
"base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-21T23:51:49Z | ---
tags:
- merge
- mergekit
- lazymergekit
base_model:
- udkai/Turdus
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
license: apache-2.0
---
# Beast-Soul
Beast-Soul is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Vasanth/Beast-Soul"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
llmixer/BigWeave-v12-90b | llmixer | 2024-03-05T06:11:08Z | 725 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Xwin",
"Euryale 1.3",
"Platypus2",
"WinterGoddess",
"frankenmerge",
"dare",
"ties",
"90b",
"conversational",
"en",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T10:03:44Z | ---
language:
- en
license: llama2
tags:
- Xwin
- Euryale 1.3
- Platypus2
- WinterGoddess
- frankenmerge
- dare
- ties
- 90b
pipeline_tag: conversational
model-index:
- name: BigWeave-v12-90b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v12-90b
name: Open LLM Leaderboard
---
# BigWeave v12 90B
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to identify merge settings equaling or surpassing the performance of Goliath-120b. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
This version is a DARE-TIES merge of two passthrough merges: Xwin-LM-70b-v0.1 + Euryale-1.3-70b ([BigWeave v6](https://huggingface.co/llmixer/BigWeave-v6-90b)) and Platypus2-70b-instruct + WinterGoddess-1.4x-70b (BigWeave v8). Both models individually show strong performance, and the merged model achieves even lower perplexity than each model separately.
The 90b size allows for 4bit quants to fit into 48GB of VRAM.
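As a rough sketch, a perplexity comparison like the one mentioned above can be run as follows (the evaluation text, context length, and dtype are placeholders, not the exact settings behind that claim):
```python
# Rough perplexity check: lower is better. Loading 90b models in fp16 needs a lot of
# VRAM or CPU offloading; in practice a quantized load would be used instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_id: str, text: str, max_len: int = 2048) -> float:
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    ids = tok(text, return_tensors="pt").input_ids[:, :max_len].to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the sample
    return torch.exp(loss).item()

sample = open("eval_sample.txt").read()  # placeholder evaluation text
for m in ["llmixer/BigWeave-v12-90b", "llmixer/BigWeave-v6-90b", "llmixer/BigWeave-v8-90b"]:
    print(m, perplexity(m, sample))
```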
# Prompting Format
Vicuna and Alpaca.
# Merge process
The models used in the merge are [Xwin-LM-70b-v0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), [Euryale-1.3-70b](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [Platypus2-70b-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
Merge configuration:
```
slices:
- sources:
- model: Xwin-LM/Xwin-LM-70B-V0.1
layer_range: [0,12]
- sources:
- model: Sao10K/Euryale-1.3-L2-70B
layer_range: [9,14]
- sources:
- model: Xwin-LM/Xwin-LM-70B-V0.1
layer_range: [12,62]
- sources:
- model: Sao10K/Euryale-1.3-L2-70B
layer_range: [54,71]
- sources:
- model: Xwin-LM/Xwin-LM-70B-V0.1
layer_range: [62,80]
merge_method: passthrough
dtype: float16
---
slices:
- sources:
- model: garage-bAInd/Platypus2-70B-instruct
layer_range: [0,12]
- sources:
- model: Sao10K/WinterGoddess-1.4x-70B-L2
layer_range: [9,14]
- sources:
- model: garage-bAInd/Platypus2-70B-instruct
layer_range: [12,62]
- sources:
  - model: Sao10K/WinterGoddess-1.4x-70B-L2
layer_range: [54,71]
- sources:
- model: garage-bAInd/Platypus2-70B-instruct
layer_range: [62,80]
merge_method: passthrough
dtype: float16
---
models:
- model: llmixer/BigWeave-v8-90b
parameters:
weight: 0.5
density: 0.25
merge_method: dare_ties
base_model: llmixer/BigWeave-v6-90b
dtype: float16
```
# Acknowledgements
[@Xwin-LM](https://huggingface.co/Xwin-LM) For creating Xwin
[@Sao10K](https://huggingface.co/Sao10K) For creating Euryale and WinterGoddess
[@garage-bAInd](https://huggingface.co/garage-bAInd) For creating Platypus2
[@alpindale](https://huggingface.co/alpindale) For creating the original Goliath
[@chargoddard](https://huggingface.co/chargoddard) For developing [mergekit](https://github.com/cg123/mergekit).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_llmixer__BigWeave-v12-90b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.19|
|AI2 Reasoning Challenge (25-Shot)|68.09|
|HellaSwag (10-Shot) |87.70|
|MMLU (5-Shot) |69.41|
|TruthfulQA (0-shot) |61.35|
|Winogrande (5-shot) |81.22|
|GSM8k (5-shot) |47.38|
|
Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test | Josephgflowers | 2024-03-09T13:54:50Z | 725 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T13:56:19Z | ---
license: mit
widget:
- text: '<|system|>
You are a helpful assistant</s>
<|user|>
Can you explain to me how quantum computing works?</s>
<|assistant|>
'
model-index:
- name: Tinyllama-Cinder-1.3B-Reason-Test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 34.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 58.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 4.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/Tinyllama-Cinder-1.3B-Reason-Test
name: Open LLM Leaderboard
---
A 1.3B test of two Cinder models, merged from layers 1-22 and 18-22 and then trained on math and step-by-step reasoning.

**Model Overview**: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets. This checkpoint is currently being tested on the Reason-with-cinder dataset.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Tinyllama-Cinder-1.3B-Reason-Test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.88|
|AI2 Reasoning Challenge (25-Shot)|34.56|
|HellaSwag (10-Shot) |58.24|
|MMLU (5-Shot) |25.79|
|TruthfulQA (0-shot) |39.93|
|Winogrande (5-shot) |63.93|
|GSM8k (5-shot) | 4.85|
|
jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES | jsfs11 | 2024-03-03T00:06:04Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"CultriX/Wernicke-7B-v9",
"mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T00:04:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v9
- mlabonne/NeuralBeagle14-7B
model-index:
- name: RandomMergeNoNormWEIGHTED-7B-DARETIES
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
name: Open LLM Leaderboard
---
# RandomMergeNoNormWEIGHTED-7B-DARETIES
RandomMergeNoNormWEIGHTED-7B-DARETIES is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/Wernicke-7B-v9](https://huggingface.co/CultriX/Wernicke-7B-v9)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: FelixChao/WestSeverus-7B-DPO-v2
# No parameters necessary for base model
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: [1, 0.7, 0.1]
weight: [0, 0.3, 0.7, 1]
- model: CultriX/Wernicke-7B-v9
parameters:
density: [1, 0.7, 0.3]
weight: [0, 0.25, 0.5, 1]
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.25
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
int8_mask: true
normalize: true
sparsify:
- filter: mlp
value: 0.5
- filter: self_attn
value: 0.5
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__RandomMergeNoNormWEIGHTED-7B-DARETIES)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.36|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |88.50|
|MMLU (5-Shot) |64.94|
|TruthfulQA (0-shot) |71.50|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |70.28|
|
g-ronimo/phi-2-OpenHermes-2.5 | g-ronimo | 2024-02-05T17:33:27Z | 725 | 10 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"dataset:teknium/OpenHermes-2.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T06:14:40Z | ---
library_name: transformers
license: mit
datasets:
- teknium/OpenHermes-2.5
---
# microsoft/phi-2 + teknium/OpenHermes-2.5
## Training
* QLoRA rank 32, LR 2e-5, 1 epoch
* effective batch size: 200
* max. seq. length: 1024 tokens
* code in code/ (a rough configuration sketch is also shown below)
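For orientation only, a QLoRA run matching the hyperparameters above might look like the sketch below; the LoRA target modules, batch split, and ChatML formatting are assumptions, not the contents of code/.
```python
# Illustrative sketch (not the author's training script): QLoRA on phi-2 with
# rank 32, LR 2e-5, 1 epoch, effective batch size 200, max sequence length 1024.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16
    ),
)

def to_chatml(example):
    # Flatten an OpenHermes ShareGPT-style conversation into one training string.
    roles = {"system": "system", "human": "user", "gpt": "assistant"}
    text = ""
    for turn in example["conversations"]:
        text += f"<|im_start|>{roles.get(turn['from'], turn['from'])}\n{turn['value']}<|im_end|>\n"
    return {"text": text}

dataset = load_dataset("teknium/OpenHermes-2.5", split="train").map(to_chatml)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    peft_config=LoraConfig(
        r=32, lora_alpha=64, lora_dropout=0.05, task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "dense", "fc1", "fc2"],  # assumed phi module names
    ),
    args=TrainingArguments(
        output_dir="phi-2-openhermes-qlora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=50,  # 4 * 50 = effective batch size 200
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
```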
## Evals
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[g-ronimo/phi-2-OpenHermes-2.5](https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5)| 30.27| 71.18| 43.87| 35.9| 45.3|
|[minghaowu/phi-2-OpenHermes-2.5](https://huggingface.co/minghaowu/phi-2-OpenHermes-2.5)| 27.95| 67.55| 48.07| 36.17| 44.94|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.96| 70.84| 44.46| 35.17| 44.61|
## Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
modelpath="g-ronimo/phi-2-OpenHermes-2.5"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.bfloat16,
device_map="auto",
# attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(modelpath)
messages = [
{"role": "system", "content": "answer like a pirate"},
{"role": "user", "content": "what does it mean to be successful?"},
]
input_tokens = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
output_tokens = model.generate(input_tokens, max_new_tokens=500)
output = tokenizer.decode(output_tokens[0])
print(output)
```
>Ahoy there, matey! To me, being successful means having the wind in your sails and reaching the treasure you've been dreaming of. It's about setting sail on a journey with clear goals, working hard, facing challenges head-on, and never losing sight of what truly matters. So, set your compass right, hoist your Jolly Roger high, and let's embark on this adventure together! ⚓️💰⛵️
|
BarraHome/rezephyr_merged_4bit | BarraHome | 2024-02-05T19:57:50Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"text-classification",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:unsloth/zephyr-sft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-classification | 2024-02-05T17:49:33Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/zephyr-sft
datasets:
- yahma/alpaca-cleaned
pipeline_tag: text-classification
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model :** unsloth/zephyr-sft
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
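As a minimal loading sketch (sequence length and prompt are illustrative), the merged 4-bit checkpoint can be pulled back through Unsloth for fast inference or further fine-tuning:
```python
# Illustrative only: load the merged 4-bit model with Unsloth and run one generation.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BarraHome/rezephyr_merged_4bit",
    max_seq_length=2048,   # assumed context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

prompt = "### Instruction:\nExplain what QLoRA fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```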
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
g-ronimo/phi-2-OpenHermes-2.5-v2 | g-ronimo | 2024-03-08T14:43:32Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"dataset:teknium/OpenHermes-2.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-07T05:12:28Z | ---
library_name: transformers
license: mit
datasets:
- teknium/OpenHermes-2.5
---
# microsoft/phi-2 + teknium/OpenHermes-2.5
## Training
* QLoRA rank 32, LR 5e-5, 3 epochs
* batch size: 200
* max. seq. length: 1024 tokens
## Evals
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[phi-2-OpenHermes-2.5-v2](https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5-LR5e-5-ep3)| 32.02| 70.56| 44.76| 36.85| 46.05|
|[phi-2-OpenHermes-2.5](https://huggingface.co/g-ronimo/phi-2-OpenHermes-2.5)| 30.27| 71.18| 43.87| 35.9| 45.3|
|[minghaowu/phi-2-OpenHermes-2.5](https://huggingface.co/minghaowu/phi-2-OpenHermes-2.5)| 27.95| 67.55| 48.07| 36.17| 44.94|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.96| 70.84| 44.46| 35.17| 44.61|
## Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
modelpath="g-ronimo/phi-2-OpenHermes-2.5-v2"
model = AutoModelForCausalLM.from_pretrained(
modelpath,
torch_dtype=torch.bfloat16,
device_map="auto",
# attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(modelpath)
messages = [
{"role": "system", "content": "answer like a pirate"},
{"role": "user", "content": "what does it mean to be successful?"},
]
input_tokens = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
output_tokens = model.generate(input_tokens, max_new_tokens=500)
output = tokenizer.decode(output_tokens[0])
print(output)
```
>Ahoy there, matey! To me, being successful means having the wind in your sails and reaching the treasure you've been dreaming of. It's about setting sail on a journey with clear goals, working hard, facing challenges head-on, and never losing sight of what truly matters. So, set your compass right, hoist your Jolly Roger high, and let's embark on this adventure together! ⚓️💰⛵️
|
FelixChao/Capricorn-7B | FelixChao | 2024-02-14T17:11:00Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T16:55:59Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kukedlc/Neural-Cosmic-Boy-7B-slerp | Kukedlc | 2024-05-25T17:09:33Z | 725 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-Cosmic-7B-slerp",
"Kukedlc/NeuralLogic-7B-V",
"Kukedlc/SuperCombo",
"base_model:Kukedlc/Neural-Cosmic-7B-slerp",
"base_model:Kukedlc/NeuralLogic-7B-V",
"base_model:Kukedlc/SuperCombo",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T23:07:34Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-Cosmic-7B-slerp
- Kukedlc/NeuralLogic-7B-V
- Kukedlc/SuperCombo
base_model:
- Kukedlc/Neural-Cosmic-7B-slerp
- Kukedlc/NeuralLogic-7B-V
- Kukedlc/SuperCombo
model-index:
- name: Neural-Cosmic-Boy-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/Neural-Cosmic-Boy-7B-slerp
name: Open LLM Leaderboard
---
## Note: The merge method is ties, not slerp.
# Neural-Cosmic-Boy-7B-slerp

Neural-Cosmic-Boy-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural-Cosmic-7B-slerp](https://huggingface.co/Kukedlc/Neural-Cosmic-7B-slerp)
* [Kukedlc/NeuralLogic-7B-V](https://huggingface.co/Kukedlc/NeuralLogic-7B-V)
* [Kukedlc/SuperCombo](https://huggingface.co/Kukedlc/SuperCombo)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/Neural-Cosmic-7B-slerp
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: Kukedlc/NeuralLogic-7B-V
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: Kukedlc/SuperCombo
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-Cosmic-Boy-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__Neural-Cosmic-Boy-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.08|
|AI2 Reasoning Challenge (25-Shot)|70.48|
|HellaSwag (10-Shot) |87.65|
|MMLU (5-Shot) |64.92|
|TruthfulQA (0-shot) |67.10|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |72.33|
|
alnrg2arg/blockchainlabs_tinyllama_fusion_LHK_yunkong_v2 | alnrg2arg | 2024-02-19T04:15:16Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T02:37:24Z | ---
license: mit
---
This model is based on the fusion strategy offered by Fanqi Wan ([FuseLLM](https://github.com/fanqiwan/FuseLLM)).

Three models are fused together and trained for 10 epochs:

* Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
* Blending model 1: HanNayeoniee/LHK_DPO_v1
* Blending model 2: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B

This model will later be optimized with Laser and DPO.

This project aims to build an on-device sLM; we are running experiments on these models. |
sethuiyer/CodeCalc-Mistral-7B | sethuiyer | 2024-03-07T06:27:07Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"base_model:uukuguy/speechless-code-mistral-7b-v1.0",
"base_model:upaya07/Arithmo2-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T08:50:59Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- uukuguy/speechless-code-mistral-7b-v1.0
- upaya07/Arithmo2-Mistral-7B
pipeline_tag: text-generation
model-index:
- name: sethuiyer/CodeCalc-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/CodeCalc-Mistral-7B
name: Open LLM Leaderboard
---
# CodeCalc-Mistral-7B
<p align="center">
<img src="https://huggingface.co/sethuiyer/CodeCalc-Mistral-7B/resolve/main/codecalc.webp" height="128px" alt="CodeCalc">
</p>
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: uukuguy/speechless-code-mistral-7b-v1.0
dtype: bfloat16
merge_method: ties
models:
- model: uukuguy/speechless-code-mistral-7b-v1.0
- model: upaya07/Arithmo2-Mistral-7B
parameters:
density: [0.25, 0.35, 0.45, 0.35, 0.25]
weight: [0.1, 0.25, 0.5, 0.25, 0.1]
parameters:
int8_mask: true
```
### Evaluation
| T | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|----|---------------------------------------------|---------|------|-----------|-------|------------|------------|-------|
| 🔍 | sethuiyer/CodeCalc-Mistral-7B | 66.33 | 61.95| 83.64 | 62.78 | 47.79 | 78.3 | 63.53 |
| 📉 | uukuguy/speechless-code-mistral-7b-v1.0 | 63.6 | 61.18| 83.77 | 63.4 | 47.9 | 78.37 | 47.01 |
The merge appears to be successful, especially considering the substantial improvement in the GSM8K benchmark while maintaining comparable performance on other metrics.
## Usage
Use the Alpaca instruction format together with the [Divine Intellect](https://raw.githubusercontent.com/oobabooga/text-generation-webui/ae8cd449ae3e0236ecb3775892bb1eea23f9ed68/presets/Divine%20Intellect.yaml) sampling preset.
```
You are an intelligent programming assistant.
### Instruction:
Implement a linked list in C++
### Response:
```
Preset:
```text
temperature: 1.31
top_p: 0.14
repetition_penalty: 1.17
top_k: 49
```
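Putting the prompt format and preset together, a minimal transformers sketch (loading options and token budget are assumptions) looks like this:
```python
# Sketch: Alpaca-style prompt plus the "Divine Intellect" sampling values from above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/CodeCalc-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.31,
    top_p=0.14,
    top_k=49,
    repetition_penalty=1.17,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```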
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__CodeCalc-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.33|
|AI2 Reasoning Challenge (25-Shot)|61.95|
|HellaSwag (10-Shot) |83.64|
|MMLU (5-Shot) |62.78|
|TruthfulQA (0-shot) |47.79|
|Winogrande (5-shot) |78.30|
|GSM8k (5-shot) |63.53|
|
gmonsoon/Qwenchana-1.8B | gmonsoon | 2024-03-04T12:57:02Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-24T22:23:48Z | ---
license: other
library_name: transformers
model-index:
- name: Qwenchana-1.8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 38.23
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 59.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/Qwenchana-1.8B
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gmonsoon__Qwenchana-1.8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |43.83|
|AI2 Reasoning Challenge (25-Shot)|38.23|
|HellaSwag (10-Shot) |59.92|
|MMLU (5-Shot) |45.78|
|TruthfulQA (0-shot) |39.58|
|Winogrande (5-shot) |60.30|
|GSM8k (5-shot) |19.18|
|
Locutusque/Hyperion-1.5-Mistral-7B | Locutusque | 2024-03-07T18:30:24Z | 725 | 9 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:Locutusque/hyperion-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T19:22:02Z | ---
license: apache-2.0
library_name: transformers
tags:
- conversational
datasets:
- Locutusque/hyperion-v1.5
model-index:
- name: Hyperion-1.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.78
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
---
# Model Card for Locutusque/Hyperion-1.5-Mistral-7B

## Model Details
**Model Name**: Locutusque/Hyperion-1.5-Mistral-7B
**Base Model**: mistralai/Mistral-7B-v0.1
**Publisher**: M4-ai
**Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
**Language**: Multi-domain, English language.
**License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-1.5-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `Locutusque/Hyperion-1.5-Mistral-7B` model was fine-tuned on the Hyperion-v1.5 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Evaluation Results
Coming soon...
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-1.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
## Licensing Information
This model is released under the Apache-2.0 license.
## Citation Information
If you use Locutusque/Hyperion-1.5-Mistral-7B in your research, please cite the Hyperion dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hyperion-1.5: Illuminating the Path to Advanced Reasoning with a High-Quality, Multidisciplinary Question Answering Dataset},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/hyperion-v1.5}
}
```
## Quants
exl2 and GGUF quants by bartowski:
* [exl2](https://huggingface.co/bartowski/Hyperion-1.5-Mistral-7B-exl2)
* [GGUF](https://huggingface.co/bartowski/Hyperion-1.5-Mistral-7B-GGUF)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hyperion-1.5-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.43|
|AI2 Reasoning Challenge (25-Shot)|60.49|
|HellaSwag (10-Shot) |83.64|
|MMLU (5-Shot) |63.57|
|TruthfulQA (0-shot) |41.78|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |40.49|
|
vishnukv/newmerge | vishnukv | 2024-03-05T11:47:28Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:PetroGPT/WestSeverus-7B-DPO",
"base_model:uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b",
"base_model:vishnukv/WestSeverusJaskier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T11:42:54Z | ---
base_model:
- PetroGPT/WestSeverus-7B-DPO
- uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b
- vishnukv/WestSeverusJaskier
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b](https://huggingface.co/uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b) as a base.
### Models Merged
The following models were included in the merge:
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [vishnukv/WestSeverusJaskier](https://huggingface.co/vishnukv/WestSeverusJaskier)
|
Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5 | Kukedlc | 2024-03-07T03:43:54Z | 725 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T00:48:06Z | ---
license: apache-2.0
datasets:
- microsoft/orca-math-word-problems-200k
- ise-uiuc/Magicoder-Evol-Instruct-110K
- Vezora/Tested-22k-Python-Alpaca
---
# Datacard for Custom Trained Model
- Base Model : [Kukedlc/NeuralExperiment-7b-dare-ties](https://huggingface.co/Kukedlc/NeuralExperiment-7b-dare-ties)
## Model Description
This model is an experimental AI trained on three distinct datasets focusing on logical reasoning, mathematics, and programming. The training process involved fine-tuning from the last layer (31) backward with a gradually decreasing learning rate. The primary goal is to address and rectify the common 'INSTINST' bug observed in leaderboard models through targeted training on the final layers.
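A rough sketch of what this layer-wise scheme could look like using per-layer optimizer parameter groups; the base learning rate and the linear decay below are illustrative assumptions, not the exact values used here.
```python
# Hypothetical sketch: fine-tune from the last decoder layer backward with a decaying learning rate.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Kukedlc/NeuralExperiment-7b-dare-ties")

base_lr = 2e-5
num_layers = model.config.num_hidden_layers  # 32 decoder layers for a Mistral-7B-style model
param_groups = []
for idx, layer in enumerate(model.model.layers):
    # Layers closer to the top (index 31) get a larger learning rate; earlier layers decay toward zero.
    lr = base_lr * (idx + 1) / num_layers
    param_groups.append({"params": layer.parameters(), "lr": lr})

optimizer = torch.optim.AdamW(param_groups)  # embeddings and lm_head are left out (frozen) in this sketch
```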
## Datasets Used for Training
- `microsoft/orca-math-word-problems-200k`: A large-scale dataset of mathematical word problems aimed at enhancing the model's numerical reasoning and problem-solving capabilities.
- `ise-uiuc/Magicoder-Evol-Instruct-110K`: A dataset designed to improve code generation and understanding, contributing to the model's programming language proficiency.
- `sahil2801/CodeAlpaca-20k`: A dataset focused on programming challenges to further refine the model's coding and logical reasoning skills.
Each dataset contributed 20,000 data points to the training process, ensuring a balanced representation of logic, mathematics, and programming tasks.
## Training Environment
- The model was trained on Kaggle's free GPU environment, allowing for cost-effective fine-tuning and experimentation.
- Users interested in replicating or extending this training can find the Kaggle notebook in my profile or request it directly for collaborative purposes.
## Preliminary Results
- The model shows promising results in solving logical puzzles and mathematical problems, especially those with misleading or non-obvious solutions that it initially struggled with.
- Ongoing experiments aim to quantify the impact of targeted training on the model's reasoning capabilities across different domains.
## Invitation for Collaboration
- Feedback, suggestions, and collaborative efforts are highly encouraged to further refine and evaluate the model.
- If interested in contributing or experimenting with this model, please feel free to reach out or access the code directly from my Kaggle profile.
## Contact Information
- For any inquiries, suggestions, or collaboration proposals, please contact me!
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralExperiment-7b-MagicCoder-v7"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

|
QueryloopAI/MonarchCoder-7B | QueryloopAI | 2024-04-03T15:59:17Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"merge",
"mergekit",
"lazymergekit",
"Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0",
"mlabonne/AlphaMonarch-7B",
"base_model:Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0",
"base_model:mlabonne/AlphaMonarch-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | 2024-03-09T06:56:24Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
base_model:
- Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
- mlabonne/AlphaMonarch-7B
model-index:
- name: MonarchCoder-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.21
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/MonarchCoder-7B
name: Open LLM Leaderboard
---
# MonarchCoder-7B
MonarchCoder-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0](https://huggingface.co/Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
layer_range: [0, 32]
- model: mlabonne/AlphaMonarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "QueryloopAI/MonarchCoder-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abideen__MonarchCoder-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.17|
|AI2 Reasoning Challenge (25-Shot)|68.52|
|HellaSwag (10-Shot) |87.30|
|MMLU (5-Shot) |64.65|
|TruthfulQA (0-shot) |61.21|
|Winogrande (5-shot) |80.19|
|GSM8k (5-shot) |65.13|
|
automerger/Experiment27Pastiche-7B | automerger | 2024-03-11T11:24:38Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T16:05:17Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
---
# Experiment27Pastiche-7B
Experiment27Pastiche-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment27-7B
# No parameters necessary for base model
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: yam-peleg/Experiment27-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27Pastiche-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
vishnukv/WestSeverusJaskier-OpenOrca | vishnukv | 2024-03-12T03:26:30Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:Open-Orca/OpenOrca",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T22:54:52Z |
---
license: mit
datasets:
- Open-Orca/OpenOrca
library_name: peft
base_model: models--vishnukv--WestSeverusJaskier/snapshots/c36fc5adc83cce1229db9ae808dab4e0d5521212
---
## Model Details
- **Developed by:** VishnuKV
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** MIT
- **Finetuned from model:** vishnukv/WestSeverusJaskier (snapshot c36fc5adc83cce1229db9ae808dab4e0d5521212)
|
0-hero/Matter-0.1-Slim-7B-C | 0-hero | 2024-04-03T08:37:19Z | 725 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T05:39:43Z | ---
license: apache-2.0
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is a continued full fine-tune on the [slim-C version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-C), which is curated from over 35 datasets analyzing >6B tokens.
### Training
The full fine-tune took ~17 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
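As a quick illustration, the prompt above can be assembled by hand in Python; the system and user strings below are placeholders.
```python
# Minimal sketch: build a ChatML-formatted prompt for Matter 7B.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful AI assistant.", "Summarize the ChatML format in one sentence.")
print(prompt)
```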
### Function Calling
The model also supports function calling, which uses the additional tokens listed below. A parsing sketch follows the example.
Model function-call tokens:
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function-call response tokens:
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
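A minimal sketch of how downstream code might extract the function call from generated text; the single-quote normalization is an assumption based on the payload shown in the example above, not part of the model or any library API.
```python
# Hypothetical helper: pull the JSON payload out of a <|begin_func|>...<|end_func|> span.
import json
import re

def extract_function_call(generated_text: str):
    match = re.search(r"<\|begin_func\|>(.*?)<\|end_func\|>", generated_text, re.DOTALL)
    if match is None:
        return None
    # Drop the single quotes wrapping the arguments string (naive; assumes no apostrophes in values).
    payload = match.group(1).strip().replace("'", "")
    return json.loads(payload)

text = '<|begin_func|>{"name": "get_news_headlines", "arguments": \'{"country": "United States"}\'}<|end_func|>'
print(extract_function_call(text))
# -> {'name': 'get_news_headlines', 'arguments': {'country': 'United States'}}
```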
|
juhwanlee/gemma-7B-alpaca-case-3-3 | juhwanlee | 2024-03-26T06:19:44Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T14:54:00Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tuned this model for the data-ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 examples.)
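A rough sketch of this sampling step with the Hugging Face `datasets` library; the shuffle seed is an assumption.
```python
# Hypothetical sketch: randomly sample 100,000 OpenOrca examples for fine-tuning.
from datasets import load_dataset

orca = load_dataset("Open-Orca/OpenOrca", split="train")
subset = orca.shuffle(seed=42).select(range(100_000))
print(subset)
```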
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
Gille/StrangeMerges_53-7B-model_stock | Gille | 2024-04-03T09:13:48Z | 725 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T23:44:10Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
model-index:
- name: StrangeMerges_53-7B-model_stock
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.78
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 73.86
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_53-7B-model_stock
name: Open LLM Leaderboard
---
# StrangeMerges_53-7B-model_stock
StrangeMerges_53-7B-model_stock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_52-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_52-7B-dare_ties) (base model)
* [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0)
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_52-7B-dare_ties
- model: rwitz/experiment26-truthy-iter-0
- model: Gille/StrangeMerges_32-7B-slerp
- model: AurelPx/Percival_01-7b-slerp
- model: Kukedlc/NeuralMaths-Experiment-7b
merge_method: model_stock
base_model: Gille/StrangeMerges_52-7B-dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_53-7B-model_stock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_53-7B-model_stock)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.07|
|AI2 Reasoning Challenge (25-Shot)|72.78|
|HellaSwag (10-Shot) |88.46|
|MMLU (5-Shot) |64.97|
|TruthfulQA (0-shot) |73.86|
|Winogrande (5-shot) |83.66|
|GSM8k (5-shot) |72.71|
|
Ppoyaa/StarMonarch-7B | Ppoyaa | 2024-04-03T19:51:00Z | 725 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"en",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:Nexusflow/Starling-LM-7B-beta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T08:32:01Z | ---
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- Nexusflow/Starling-LM-7B-beta
license: apache-2.0
language:
- en
---
# StarMonarch-7B

# Description
StarMonarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
This model uses a context window of 8k. Special thanks to mlabonne and Nexusflow for the models.
## 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.45|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |87.00|
|MMLU (5-Shot) |65.48|
|TruthfulQA (0-shot) |67.20|
|Winogrande (5-shot) |82.16|
|GSM8k (5-shot) |73.62|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [0, 32]
- model: Nexusflow/Starling-LM-7B-beta
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/StarMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0 | LeroyDyer | 2024-04-09T16:10:37Z | 725 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"megamerge",
"code",
"Cyber-Series",
"conversational",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:gate369/Alpaca-Star",
"dataset:gate369/alpaca-star-ascii",
"base_model:LeroyDyer/Mixtral_AI_Multi_TEST",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T07:28:02Z | ---
base_model:
- LeroyDyer/Mixtral_AI_Multi_TEST
- LeroyDyer/Mixtral_AI_Cyber_Dolphin_2.0
- LeroyDyer/Mixtral_AI_CyberLAW
- LeroyDyer/Mixtral_AI_CyberBrain_3_0
- LeroyDyer/Mixtral_AI_Cyber_5.0
- LeroyDyer/Mixtral_AI_CyberBrain_2.0
- ezelikman/quietstar-8-ahead
- KoboldAI/Mistral-7B-Erebus-v3
library_name: transformers
tags:
- mergekit
- megamerge
- code
- Cyber-Series
license: mit
language:
- en
datasets:
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin
- WhiteRabbitNeo/WRN-Chapter-2
- WhiteRabbitNeo/WRN-Chapter-1
- gate369/Alpaca-Star
- gate369/alpaca-star-ascii
---
Currently undergoing fine-tuning, as this model contains all previous models!
This model contains many hidden tensors: it was merged with many LoRA adapters for various tasks such as vision and sound.
The problem was that, for some reason, the extra heads would not show up as they do in other models, such as the Llava model. The config.json can be changed to a Llava-style config and it works: the model can think and has hidden "think" heads, but you need to configure it yourself. It also has vision heads, although that config could not be set up either.
So, hidden talents: it was also merged with the parent models for Quiet-STaR (thoughts) and Llava (vision, etc.), so the tensors are there. They just need a single training example to populate the hidden tensors, hence the merges. When the model is put in train mode (i.e. by calling model.train() after loading), the tensors appear, waiting for training, so just add a PEFT adapter and start training!
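A minimal sketch of that last step, assuming a standard PEFT LoRA setup; the target modules and hyperparameters below are illustrative assumptions, not the exact recipe used for this model.
```python
# Hypothetical sketch: put the model in train mode and attach a LoRA adapter with PEFT.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0")
model.train()  # per the note above, train mode is what exposes the extra tensors

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```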
THIS VERSION HAS BEEN UPDATED TO INCLUDE CYBERBRAIN ! (Hidden Tensors)
## Extended capabilities:
* mistralai/Mistral-7B-Instruct-v0.1 - Prime-Base
* ChaoticNeutrals/Eris-LelantaclesV2-7b - role play
* ChaoticNeutrals/Eris_PrimeV3-Vision-7B - vision
* rvv-karma/BASH-Coder-Mistral-7B - coding
* Locutusque/Hercules-3.1-Mistral-7B - Unhinging
* KoboldAI/Mistral-7B-Erebus-v3 - NSFW
* Locutusque/Hyperion-2.1-Mistral-7B - CHAT
* Severian/Nexus-IKM-Mistral-7B-Pytorch - Thinking
* NousResearch/Hermes-2-Pro-Mistral-7B - Generalizing
* mistralai/Mistral-7B-Instruct-v0.2 - BASE
* Nitral-AI/ProdigyXBioMistral_7B - medical
* Nitral-AI/Infinite-Mika-7b - 128k - Context Expansion enforcement
* Nous-Yarn-Mistral-7b-128k - 128k - Context Expansion
* yanismiraoui/Yarn-Mistral-7b-128k-sharded
* ChaoticNeutrals/Eris_Prime-V2-7B - Roleplay
This expert is a companion to the MEGA_MIND 24b CyberSeries, which represents a groundbreaking leap in the realm of language models, integrating a diverse array of expert models into a unified framework. At its core lies the Mistral-7B-Instruct-v0.2, a refined instructional model designed for versatility and efficiency.
Enhanced with an expanded context window and advanced routing mechanisms, the Mistral-7B-Instruct-v0.2 exemplifies the power of Mixture of Experts, allowing seamless integration of specialized sub-models. This architecture facilitates unparalleled performance and scalability, enabling the CyberSeries to tackle a myriad of tasks with unparalleled speed and accuracy.
Among its illustrious sub-models, the OpenOrca - Mistral-7B-8k shines as a testament to fine-tuning excellence, boasting top-ranking performance in its class. Meanwhile, the Hermes 2 Pro introduces cutting-edge capabilities such as Function Calling and JSON Mode, catering to diverse application needs.
Driven by Reinforcement Learning from AI Feedback, the Starling-LM-7B-beta demonstrates remarkable adaptability and optimization, while the Phi-1.5 Transformer model stands as a beacon of excellence across various domains, from common sense reasoning to medical inference.
With models like BioMistral tailored specifically for medical applications and Nous-Yarn-Mistral-7b-128k excelling in handling long-context data, the MEGA_MIND 24b CyberSeries emerges as a transformative force in the landscape of language understanding and artificial intelligence.
Experience the future of language models with the MEGA_MIND 24b CyberSeries, where innovation meets performance, and possibilities are limitless.
### Models Merged
The following models were included in the merge:
* [LeroyDyer/Mixtral_AI_Multi_TEST](https://huggingface.co/LeroyDyer/Mixtral_AI_Multi_TEST)
* [LeroyDyer/Mixtral_AI_CyberLAW](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberLAW)
* [LeroyDyer/Mixtral_AI_CyberBrain_3_0](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberBrain_3_0)
* [LeroyDyer/Mixtral_AI_Cyber_5.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_5.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: LeroyDyer/Mixtral_AI_Cyber_Dolphin_2.0
parameters:
density: [0.256, 0.512, 0.128] # density gradient
weight: 0.382
- model: LeroyDyer/Mixtral_AI_CyberLAW
parameters:
density: 0.382
weight: [0.256, 0.128, 0.256, 0.128] # weight gradient
- model: LeroyDyer/Mixtral_AI_CyberBrain_3_0
parameters:
density: 0.382
weight: [0.128, 0.512, 0.128, 0.128] # weight gradient
- model: LeroyDyer/Mixtral_AI_Multi_TEST
parameters:
density: 0.382
weight: [0.128, 0.512, 0.128, 0.128] # weight gradient
- model: LeroyDyer/Mixtral_AI_Cyber_5.0
parameters:
density: 0.382
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: LeroyDyer/Mixtral_AI_Cyber_Dolphin_2.0
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |