modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
QuantFactory/Llama-3-Alpha-Ko-8B-Evo-GGUF | QuantFactory | 2024-06-04T09:15:22Z | 725 | 1 | null | [
"gguf",
"text-generation",
"ko",
"base_model:allganize/Llama-3-Alpha-Ko-8B-Evo",
"license:other",
"region:us"
] | text-generation | 2024-06-02T12:09:44Z | ---
license: other
license_name: llama3
language:
- ko
base_model: allganize/Llama-3-Alpha-Ko-8B-Evo
pipeline_tag: text-generation
---
# QuantFactory/Llama-3-Alpha-Ko-8B-Evo-GGUF
This is a quantized version of [allganize/Llama-3-Alpha-Ko-8B-Evo](https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Evo), created using llama.cpp.
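As a rough illustration (not part of the original card), the GGUF files in this repo can be loaded from Python with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch; the quantization filename is a hypothetical example, so check the repo's file list for the exact names.
```python
# Minimal sketch using llama-cpp-python; the GGUF filename is a hypothetical
# example - substitute one of the files actually present in this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Llama-3-Alpha-Ko-8B-Evo-GGUF",
    filename="Llama-3-Alpha-Ko-8B-Evo.Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
result = llm("피보나치 수열이 뭐야?", max_tokens=256)
print(result["choices"][0]["text"])
```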
# Model Description
> This is the "Evo" model, which serves as a base(kind of) for fine-tuning to various tasks. For general chat purposes, use the Alpha-Instruct model.
We are thrilled to introduce **Alpha-Instruct**, our latest language model, which demonstrates exceptional capabilities in both Korean and English. Alpha-Instruct is developed using the **Evolutionary Model Merging** technique, enabling it to excel in complex language tasks and logical reasoning.
A key aspect of Alpha-Instruct's development is our **community-based approach**. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation.
Alpha-Instruct has achieved outstanding performance on the **LogicKor, scoring an impressive 6.60**. Remarkably, this performance rivals that of 70B models, showcasing the efficiency and power of our 8B model. This achievement highlights Alpha-Instruct's advanced computational and reasoning skills, making it a leading choice for diverse and demanding language tasks.
**For more information and technical details about Alpha-Instruct, stay tuned to our updates and visit our [website](https://allganize-alpha.github.io/) (Soon).**
---
## Overview
Alpha-Instruct is our latest language model, developed using the 'Evolutionary Model Merging' technique. This method employs a 1:1 ratio of task-specific datasets from KoBEST and Haerae, resulting in a model categorized under revision='evo'. The following models were used for merging:
- [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (Base)
- [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) (Instruct)
- [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (Continual Pretrained)
To refine and enhance Alpha-Instruct, we utilized a specialized dataset aimed at 'healing' the model's output, significantly boosting its human preference scores. The datasets* used include:
- [Korean-Human-Judgements](https://huggingface.co/datasets/HAERAE-HUB/Korean-Human-Judgements)
- [Orca-Math](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean)
- [dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)
*Some of these datasets were partially used and translated for training, and we ensured there was no contamination during the evaluation process.
This approach effectively balances human preferences with the model's capabilities, making Alpha-Instruct well-suited for real-life scenarios where user satisfaction and performance are equally important. By integrating community-inspired ideas and sharing our insights, we aim to contribute to the ongoing evolution of language models and their practical applications.
## Benchmark Results
Results in [LogicKor](https://github.com/StableFluffy/LogicKor)* are as follows:
| Model | Single turn* | Multi turn* | Overall* |
|:------------------------------:|:------------:|:-----------:|:--------:|
| MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 |
| Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 |
| Alpha-Ko-Instruct (alt) | 7.095 | 6.571 | **6.833** |
| Alpha-Ko-Instruct | **7.143** | 6.048 | 6.600 |
| Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 |
*Self-reported (default settings with 'alpha' template, mean of 3).
Results on KoBEST (acc, num_shot=5) are as follows:
| Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | Alpha-Ko-Evo | Alpha-Ko-Instruct(main) |
| --- | --- | --- | --- | --- |
| kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 |
| kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 |
| kobest_copa | 0.7110 | 0.7650 | 0.7420 | 0.7420 |
| kobest_hellaswag | 0.3840 | 0.4440 | 0.4220 | 0.4240 |
| kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 |
| kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 |
* 'Merged' models are chosen for reference
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "allganize/Llama-3-Alpha-Ko-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "당신은 인공지능 어시스턴트입니다. 묻는 말에 친절하고 정확하게 답변하세요."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=False,
repetition_penalty=1.05,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Correspondence to
- Ji soo Kim ([email protected])
- Contributors
- Sangmin Jeon ([email protected])
- Seungwoo Ryu ([email protected])
## Special Thanks
- [@beomi](https://huggingface.co/beomi) for providing us with a great model!
## License
The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/) |
flax-sentence-embeddings/stackoverflow_mpnet-base | flax-sentence-embeddings | 2021-07-26T01:36:33Z | 724 | 5 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# stackoverflow_mpnet-base
This is a microsoft/mpnet-base model trained on 18,562,443 (title, body) pairs from StackOverflow.
SentenceTransformers is a set of models and frameworks that enable training and generating sentence embeddings from given data. The generated sentence embeddings can be utilized for clustering, semantic search and other tasks. We used a pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) model and trained it using a Siamese network setup and a contrastive learning objective. 18,562,443 (title, body) pairs from StackOverflow were used as training data. For this model, mean pooling of hidden states was used to produce sentence embeddings. See data_config.json and train_script.py in this repository for how the model was trained and which datasets were used.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as assistance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector which captures
the sentence's semantic information. The sentence vector may be used for semantic search, clustering or sentence similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/stackoverflow_mpnet-base')
text = "Replace me by any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply the cross-entropy loss, using the true pairs as the targets.
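As a rough illustration of this in-batch objective, here is a minimal PyTorch sketch (not the actual training code; the similarity scale factor is an assumption):
```python
# Minimal sketch of the in-batch contrastive objective: cosine similarity
# between every (title, body) pair in the batch, cross-entropy against the
# true pairs on the diagonal. The scale factor is an illustrative assumption.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(title_emb, body_emb, scale=20.0):
    # title_emb, body_emb: (batch_size, dim) mean-pooled sentence embeddings
    title_emb = F.normalize(title_emb, dim=-1)
    body_emb = F.normalize(body_emb, dim=-1)
    sim = title_emb @ body_emb.T * scale                    # (batch_size, batch_size)
    labels = torch.arange(sim.size(0), device=sim.device)   # true pair = diagonal
    return F.cross_entropy(sim, labels)
```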
### Hyperparameters
We trained the model on a TPU v3-8 for 80k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository.
### Training data
We used 18,562,443 (title, body) pairs from StackOverflow as training data.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| StackOverflow title body pairs | - | 18,562,443 |
|
huggingartists/bladee | huggingartists | 2021-10-08T14:48:18Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/bladee",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/bladee
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/1abf6ff09c7c4209c458e5937b088aba.640x640x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bladee</div>
<a href="https://genius.com/artists/bladee">
<div style="text-align: center; font-size: 14px;">@bladee</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Bladee.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/bladee).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/bladee")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/326nmhkf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bladee's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/28bmutxl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/28bmutxl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/bladee')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/bladee")
model = AutoModelWithLMHead.from_pretrained("huggingartists/bladee")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/kishlak | huggingartists | 2021-09-29T17:46:52Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/kishlak",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/kishlak
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c0c7e74ec794ad44eb0957d6afdd383d.815x815x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Кишлак (Kishlak)</div>
<a href="https://genius.com/artists/kishlak">
<div style="text-align: center; font-size: 14px;">@kishlak</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Кишлак (Kishlak).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/kishlak).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/kishlak")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2654f8ic/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Кишлак (Kishlak)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/12gu37uv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/12gu37uv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/kishlak')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/kishlak")
model = AutoModelWithLMHead.from_pretrained("huggingartists/kishlak")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/wnbagirlfriend | huggingtweets | 2023-02-02T03:34:06Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-02T03:32:42Z | ---
language: en
thumbnail: http://www.huggingtweets.com/wnbagirlfriend/1675308841393/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427129645888114693/HsNIpekZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jody</div>
<div style="text-align: center; font-size: 14px;">@wnbagirlfriend</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jody.
| Data | jody |
| --- | --- |
| Tweets downloaded | 3120 |
| Retweets | 92 |
| Short tweets | 588 |
| Tweets kept | 2440 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/oghnr1wa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wnbagirlfriend's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/o9d6w49a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/o9d6w49a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wnbagirlfriend')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JasperLS/gelectra-base-injection | JasperLS | 2023-05-08T14:33:47Z | 724 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-08T14:32:41Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gelectra-base-injection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gelectra-base-injection
This model is a fine-tuned version of [deepset/gelectra-base](https://huggingface.co/deepset/gelectra-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0940
- Accuracy: 0.9828
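As a minimal usage sketch (not part of the original card), the fine-tuned classifier can be loaded with the `transformers` text-classification pipeline; the label names come from this model's config and are not documented here.
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned classifier and score a piece of text.
classifier = pipeline("text-classification", model="JasperLS/gelectra-base-injection")
print(classifier("Replace me with the text you want to classify."))
```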
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 69 | 0.2601 | 0.9397 |
| No log | 2.0 | 138 | 0.0940 | 0.9828 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/Zarafusionex-1.1-L2-7B-GGUF | TheBloke | 2023-09-27T12:46:22Z | 724 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"llama2",
"base_model:zarakiquemparte/zarafusionex-1.1-l2-7b",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-08-26T12:19:11Z | ---
license: other
tags:
- llama2
model_name: Zarafusionex 1.1 L2 7B
base_model: zarakiquemparte/zarafusionex-1.1-l2-7b
inference: false
model_creator: Zaraki Quem Parte
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Zarafusionex 1.1 L2 7B - GGUF
- Model creator: [Zaraki Quem Parte](https://huggingface.co/zarakiquemparte)
- Original model: [Zarafusionex 1.1 L2 7B](https://huggingface.co/zarakiquemparte/zarafusionex-1.1-l2-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zaraki Quem Parte's Zarafusionex 1.1 L2 7B](https://huggingface.co/zarakiquemparte/zarafusionex-1.1-l2-7b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF)
* [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/zarafusionex-1.1-l2-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Zaraki Quem Parte's Zarafusionex 1.1 L2 7B](https://huggingface.co/zarakiquemparte/zarafusionex-1.1-l2-7b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [zarafusionex-1.1-l2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [zarafusionex-1.1-l2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [zarafusionex-1.1-l2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [zarafusionex-1.1-l2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [zarafusionex-1.1-l2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zarafusionex-1.1-l2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [zarafusionex-1.1-l2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [zarafusionex-1.1-l2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zarafusionex-1.1-l2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [zarafusionex-1.1-l2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [zarafusionex-1.1-l2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [zarafusionex-1.1-l2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF/blob/main/zarafusionex-1.1-l2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Zarafusionex-1.1-L2-7B-GGUF and below it, a specific filename to download, such as: zarafusionex-1.1-l2-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Zarafusionex-1.1-L2-7B-GGUF zarafusionex-1.1-l2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Zarafusionex-1.1-L2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Zarafusionex-1.1-L2-7B-GGUF zarafusionex-1.1-l2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m zarafusionex-1.1-l2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Zarafusionex-1.1-L2-7B-GGUF", model_file="zarafusionex-1.1-l2-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a minimal sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
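The sketch below shows one way to wire a local GGUF file into LangChain via llama-cpp-python. It is an illustrative assumption rather than part of the original card, and the exact import path (`langchain_community` vs. older `langchain`) depends on your LangChain version.
```python
# Minimal sketch: serve a local GGUF file to LangChain through llama-cpp-python.
# The file path and generation settings are illustrative assumptions.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="zarafusionex-1.1-l2-7b.Q4_K_M.gguf",  # downloaded as shown above
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about a thunderstorm.\n\n### Response:\n"
)
print(llm.invoke(prompt))
```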
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zaraki Quem Parte's Zarafusionex 1.1 L2 7B
# Model Card: Zarafusionex 1.1 L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (47%), and the result of this merge was merged with the [LimaRP Llama2 7B LoRA (07/23/2023 version)](https://huggingface.co/lemonilia/limarp-llama2).
This merge of models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Quantized Model by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGML)
- [GGUF](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
<!-- original-model-card end -->
|
maddes8cht/openlm-research-open_llama_3b_v2-gguf | maddes8cht | 2023-11-15T11:40:33Z | 724 | 1 | null | [
"gguf",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"region:us"
] | null | 2023-11-15T08:51:54Z | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
# open_llama_3b_v2 - GGUF
- Model creator: [openlm-research](https://huggingface.co/openlm-research)
- Original model: [open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
OpenLLaMA is a free reimplementation of the original LLaMA model, licensed under the Apache 2.0 license.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov
# Quantization variants
There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and the variation between the two answers may well be larger than the difference introduced by quantization.
---
# Original Model Card:
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_3b_v2'
# model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
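For reference, the `AutoTokenizer` route mentioned above looks like this (a minimal sketch):
```python
from transformers import AutoTokenizer

# Equivalent to the LlamaTokenizer call above: disable the fast tokenizer to
# avoid the incorrect tokenizations noted in the linked issue.
tokenizer = AutoTokenizer.from_pretrained(
    'openlm-research/open_llama_3b_v2', use_fast=False
)
```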
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model was trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
phanerozoic/Tiny-Cowboy-1.1b-v0.1 | phanerozoic | 2024-03-27T18:41:54Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-20T12:34:46Z | ---
license: cc-by-nc-4.0
language:
- en
widget:
- text: |
Howdy! What is best about the prairie, cowpoke?
example_title: "Color of a Typical Cowboy Hat"
---

# Tiny-Cowboy-1.1b-v0.1
Tiny-Cowboy-1.1b-v0.1 is a specialized language model designed for generating cowboy-themed content. Developed by phanerozoic, this model is fine-tuned from TinyLlama/TinyLlama-1.1B-Chat-v1.0 and optimized for environments with limited computing resources.
### Performance
The model excels in generating engaging cowboy narratives and demonstrates a strong grasp of cowboy culture and lifestyle. However, it is less effective in general language tasks, especially in scientific and technical domains.
### Direct Use
Ideal for thematic language generation, particularly in applications where cowboy culture and storytelling are central. Less suited for general-purpose use or scenarios requiring detailed, accurate scientific explanations.
### Context Setting and Interaction Guidelines
Tiny-Cowboy-1.1b-v0.1, being a narrowly focused and somewhat limited-performance model, benefits from an initial context-setting message. This setup involves a predefined assistant message that establishes its cowboy identity at the start of each interaction. This strategy is crucial for priming the model to maintain its cowboy theme throughout the conversation. It's important to note that the model has been fine-tuned for a cowboy style of speaking, so explicit instructions on how to respond in a cowboy manner are unnecessary.
#### Initial Context Setting:
- text: |
Assistant: Howdy! I'm your cowboy assistant, ready to talk all things Wild West. What cowboy queries can I lasso for you today?
example_title: "Initiating Cowboy Themed Conversation"
- text: |
Assistant: Yeehaw! Let's dive into the cowboy world. Ask me anything about cowboys, ranches, or the Wild West!
example_title: "Engaging in Cowboy Themed Dialogue"
The introduction by the assistant sets the thematic tone, guiding the user to interact within the cowboy context.
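Below is a minimal sketch of this pattern using the `transformers` pipeline. It is only an illustration: the plain `Assistant:`/`User:` turn markers are an assumption, so adapt them to whatever chat template your frontend applies.
```python
# Hedged sketch: prepend the cowboy context message before the user's first turn.
# The plain-text turn format is an assumption; adjust it to your chat frontend.
from transformers import pipeline

generator = pipeline("text-generation", model="phanerozoic/Tiny-Cowboy-1.1b-v0.1")

context = ("Assistant: Howdy! I'm your cowboy assistant, ready to talk all things "
           "Wild West. What cowboy queries can I lasso for you today?\n")
user_turn = "User: What's the best thing about life on the prairie?\nAssistant:"

output = generator(context + user_turn, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```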
### Training Data
Incorporates a dataset focused on cowboy and Wild West themes, derived from the foundational TinyLlama-1.1B model.
### Custom Stopping Strings
Custom stopping strings were used to refine output quality:
- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"
### Training Hyperparameters and Fine-Tuning Details
- **Base Model Name**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Base Model Class**: LlamaForCausalLM
- **Projections**: gate, down, up, q, k, v, o
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **True Batch Size**: 4
- **Gradient Accumulation Steps**: 1
- **Epochs**: 1
- **Learning Rate**: 3e-4
- **LR Scheduler**: Linear
- **LLaMA Target Projections**: All targets modified
- **Loss**: 2.096
- **Stop Step**: 42
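For reference, here is a hedged sketch of how these LoRA settings might be expressed with `peft`; the exact module names are an assumption inferred from the projections listed above.
```python
# Hedged sketch: a peft LoraConfig mirroring the hyperparameters above.
# Target module names are assumed from the listed projections (gate, down, up, q, k, v, o).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA rank
    lora_alpha=32,   # LoRA alpha
    target_modules=["gate_proj", "down_proj", "up_proj",
                    "q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```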
### Limitations
While adept at cowboy-themed content, Tiny-Cowboy-v0.1 struggles with topics outside its specialty, particularly in scientific and technical areas. The model tends to incorporate cowboy elements into responses, regardless of the question's relevance.
### Compute Infrastructure
Efficiently trained, demonstrating the feasibility of specialized model training in resource-constrained environments.
### Results
Successfully generates cowboy-themed responses, maintaining thematic consistency. However, it shows limitations in handling more complex, non-cowboy-related queries.
### Summary
Tiny-Cowboy-1.1b-v0.1 is a significant development in thematic, lightweight language models, ideal for cowboy-themed storytelling and educational purposes. Its specialization, however, limits its applicability in broader contexts, particularly where accurate, technical knowledge is required.
### Acknowledgments
Special thanks to the TinyLlama-1.1B team, whose foundational work was instrumental in the development of Tiny-Cowboy-v0.1.
|
Steelskull/Etheria-55b-v0.1 | Steelskull | 2024-03-04T12:11:55Z | 724 | 9 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"Etheria",
"arxiv:2311.03099",
"arxiv:2306.01708",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-26T11:18:25Z | ---
license: apache-2.0
tags:
- mergekit
- Etheria
base_model: []
model-index:
- name: Etheria-55b-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.93
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.16
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Etheria-55b-v0.1
name: Open LLM Leaderboard
---
# Steelskull/Etheria-55b-v0.1

## Merge Details
An attempt to make a functional Goliath-style merge to create an [Etheria] 55b-200k from two yi-34b-200k models.
Due to the merge it 'theoretically' should have a context of 200k, but I recommend starting at 32k and moving up,
as it is unknown (at this time) what the merge has done to the context length.
This is a merge of both VerA and VerB of Etheria-55b (their numbers were surprisingly good). I then created a sacrificial 55B out of the most performant yi-34b-200k model
and performed a Dare_ties merge, equalizing the model into its current state.
### Recommended Settings and Prompt Format:
I've tested it up to 32k context using exl2 with these settings:
```
"temp": 0.7,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.1,
"rep_pen": 1.1,
"rep_pen_range": 8192,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"add_bos_token": false,
"truncation_length": 2048,
"ban_eos_token": true,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
```
Prompt formats that work well:
```
ChatML & Alpaca
```
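A hedged transformers sketch that applies a subset of the settings above with a ChatML-style prompt; only the sampler options with direct `generate()` equivalents are used, and loading a 55B model this way assumes sufficient GPU memory (quantized loading may be needed in practice).
```python
# Hedged sketch: ChatML-style prompt plus a subset of the recommended sampler settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Steelskull/Etheria-55b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = ("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|>user\nWrite a short scene set on a desert planet.<|im_end|>\n"
          "<|im_start|>assistant\n")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                         temperature=0.7, top_p=1.0, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```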
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using Merged-Etheria-55b as a base.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Merged-Etheria-55b
models:
- model: Sacr-Etheria-55b
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
- model: Merged-Etheria-55b
parameters:
weight: [0.22, 0.113, 0.113, 0.113, 0.113, 0.113]
density: 0.61
merge_method: dare_ties
tokenizer_source: union
parameters:
int8_mask: true
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Steelskull__Etheria-55b-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.69|
|AI2 Reasoning Challenge (25-Shot)|65.10|
|HellaSwag (10-Shot) |81.93|
|MMLU (5-Shot) |73.66|
|TruthfulQA (0-shot) |56.16|
|Winogrande (5-shot) |76.09|
|GSM8k (5-shot) |35.18|
|
yunconglong/Mixtral_7Bx2_MoE_13B_DPO | yunconglong | 2024-01-28T10:05:32Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T00:02:18Z | ---
license: cc-by-nc-4.0
tags:
- moe
---
# Mixtral MOE 2x7B
An MoE of the following models, built with mergekit and then fine-tuned with DPO.
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [NurtureAI/neural-chat-7b-v3-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k)
* [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) |
namirocks/vicuna-tutor-shishya-model-7b-ep3 | namirocks | 2024-01-27T20:51:59Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2305.13272",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T20:43:05Z | ---
license: llama2
---
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
migtissera/Tess-34B-v1.5b | migtissera | 2024-01-28T18:20:22Z | 724 | 14 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T18:02:28Z | ---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
<br>

<br>
Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base.
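A minimal loading sketch is given below; it assumes a recent transformers install and enough GPU memory for a 34B model, and the input follows the prompt format documented in the next section.
```python
# Hedged sketch: load Tess-34B-v1.5b and query it with the SYSTEM/USER/ASSISTANT format below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-34B-v1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Summarize what a transformer model is.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```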
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
``` |
nextai-team/Moe-4x7b-reason-code-qa | nextai-team | 2024-02-08T16:24:30Z | 724 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"code",
"QA",
"reasoning",
"maths",
"sql",
"mistral",
"zephyr",
"codellama",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T21:01:13Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
- QA
- reasoning
- mixtral
- maths
- sql
- mistral
- zephyr
- codellama
---
Model Details
Model Name: Moe-4x7b-reason-code-qa
Publisher: nextai-team
Model Type: Question Answering & Code Generation
Architecture: Mixture of Experts (MoE)
Model Size: 4x7 billion parameters
Overview
Moe-4x7b-reason-code-qa is an advanced AI model designed by the nextai-team for the purpose of enhancing question answering and code generation capabilities. Building upon the foundation of its predecessor, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
How to Use
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "nextai-team/Moe-4x7b-reason-code-qa"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
device_map="auto",
model_kwargs={"torch_dtype": torch.float16},
)
def generate_response(query):
    messages = [{"role": "user", "content": query}]
    prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    return outputs[0]['generated_text']

response = generate_response("How to learn coding? Please provide a step-by-step procedure.")
print(response)
```
Intended Use
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots
- Educational tools for learning programming
- Enhancing code review processes
Model Architecture: The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance.
Training Data: The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility.
Performance: The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request.
Limitations and Biases
While it represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process.
Ethical Considerations
We are committed to ethical AI development and urge users to employ Moe-4x7b-reason-code-qa responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model.
Demo API:
app.nextai.co.in
Usage Instructions
For detailed instructions on how to integrate and utilize Moe-4x7b-reason-code-qa in your projects, please refer to our GitHub repository and Hugging Face documentation.
Citation: If you use Moe-4x7b-reason-code-qa in your research or application, please cite it as follows:
@misc{nextai2024moe4x7b, title={Moe-4x7b-reason-code-qa: Enhancing Question Answering and Code Generation with Mixture of Experts}, author={NextAI Team}, year={2024}, publisher={Hugging Face} } |
Kukedlc/neuronal-7b-Mlab | Kukedlc | 2024-05-25T17:09:29Z | 724 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralDaredevil-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:mlabonne/NeuralDaredevil-7B",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T03:38:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralDaredevil-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- mlabonne/NeuralDaredevil-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
---
# neuronal-7b-Mlab
neuronal-7b-Mlab is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralDaredevil-7B
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/neuronal-7b-Mlab"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
InnerI/InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp | InnerI | 2024-03-08T04:01:27Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"NousResearch/Yarn-Mistral-7b-128k",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T03:31:40Z | ---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- NousResearch/Yarn-Mistral-7b-128k
base_model:
- OpenPipe/mistral-ft-optimized-1218
- NousResearch/Yarn-Mistral-7b-128k
license: apache-2.0
---
# InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp
InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: NousResearch/Yarn-Mistral-7b-128k
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "InnerI/InnerILLM-OpenPipe-Nous-Yarn-Mistral-optimized-1228-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jeiku/Cookie_7B | jeiku | 2024-02-17T01:23:43Z | 724 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:jeiku/Rainbow_69_7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T20:27:16Z | ---
base_model:
- jeiku/SpaghettiOs_7B
- jeiku/Rainbow_69_7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Cookie
A reasonably logical model with a few datasets thrown in to increase RP abilities. This is a good candidate for a balanced 7B model that can provide assistant functionality alongside roleplaying or romantic endeavors.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [jeiku/SpaghettiOs_7B](https://huggingface.co/jeiku/SpaghettiOs_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Rainbow_69_7B](https://huggingface.co/jeiku/Rainbow_69_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: jeiku/SpaghettiOs_7B
parameters:
normalize: true
models:
- model: jeiku/SpaghettiOs_7B
parameters:
weight: 1
- model: jeiku/Rainbow_69_7B
parameters:
weight: 1
dtype: float16
``` |
giraffe176/Open_Maid_Samantha_Hermes_Orca | giraffe176 | 2024-03-04T16:22:00Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T05:06:35Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model: []
model-index:
- name: Open_Maid_Samantha_Hermes_Orca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.91
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Maid_Samantha_Hermes_Orca
name: Open LLM Leaderboard
---
# giraffe176/Open_Maid_Samantha_Hermes_Orca
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* cognitivecomputations/samantha-1.1-westlake-7b
* NeverSleep/Noromaid-7B-0.4-DPO
* OpenHermes-2.5-Mistral-7B
* Open-Orca/Mistral-7B-OpenOrca
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/samantha-1.1-westlake-7b
layer_range: [0, 32]
- model: NeverSleep/Noromaid-7B-0.4-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
name: workspace1
---
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
- model: Open-Orca/Mistral-7B-OpenOrca
layer_range: [0, 32]
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
name: workspace2
---
models:
- model: workspace1
layer_range: [0, 32]
- model: workspace2
layer_range: [0, 32]
merge_method: slerp
base_model: workspace1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Maid_Samantha_Hermes_Orca)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.81|
|AI2 Reasoning Challenge (25-Shot)|66.81|
|HellaSwag (10-Shot) |85.83|
|MMLU (5-Shot) |64.58|
|TruthfulQA (0-shot) |53.91|
|Winogrande (5-shot) |80.35|
|GSM8k (5-shot) |61.41|
|
Yuma42/KangalKhan-PrimordialSapphire-7B | Yuma42 | 2024-03-05T10:56:05Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"Yuma42/KangalKhan-RawEmerald-7B",
"conversational",
"en",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:Yuma42/KangalKhan-RawEmerald-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T13:38:25Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/CapybaraHermes-2.5-Mistral-7B
- Yuma42/KangalKhan-RawEmerald-7B
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
- Yuma42/KangalKhan-RawEmerald-7B
model-index:
- name: KangalKhan-PrimordialSapphire-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-PrimordialSapphire-7B
name: Open LLM Leaderboard
---
# KangalKhan-PrimordialSapphire-7B
KangalKhan-PrimordialSapphire-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
* [Yuma42/KangalKhan-RawEmerald-7B](https://huggingface.co/Yuma42/KangalKhan-RawEmerald-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: argilla/CapybaraHermes-2.5-Mistral-7B
layer_range: [0, 32]
- model: Yuma42/KangalKhan-RawEmerald-7B
layer_range: [0, 32]
merge_method: slerp
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-PrimordialSapphire-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-PrimordialSapphire-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.52|
|AI2 Reasoning Challenge (25-Shot)|65.87|
|HellaSwag (10-Shot) |85.51|
|MMLU (5-Shot) |63.11|
|TruthfulQA (0-shot) |57.25|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |61.18|
|
Gille/StrangeMerges_27-7B-dare_ties | Gille | 2024-03-04T21:50:26Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b-v2",
"Gille/StrangeMerges_21-7B-slerp",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-v2",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T02:18:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-v2
- Gille/StrangeMerges_21-7B-slerp
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-v2
- Gille/StrangeMerges_21-7B-slerp
model-index:
- name: StrangeMerges_27-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.36
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_27-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_27-7B-dare_ties
StrangeMerges_27-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-v2)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_25-7B-dare_ties
# No parameters necessary for base model
- model: eren23/ogno-monarch-jaskier-merge-7b-v2
parameters:
weight: 0.7
density: 0.6
- model: Gille/StrangeMerges_21-7B-slerp
parameters:
weight: 0.3
density: 0.45
merge_method: dare_ties
base_model: Gille/StrangeMerges_25-7B-dare_ties
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_27-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_27-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.17|
|AI2 Reasoning Challenge (25-Shot)|73.72|
|HellaSwag (10-Shot) |89.00|
|MMLU (5-Shot) |64.50|
|TruthfulQA (0-shot) |76.36|
|Winogrande (5-shot) |84.61|
|GSM8k (5-shot) |68.84|
|
aloobun/Cypher-Mixtral-2x1.8B-v0.1 | aloobun | 2024-03-01T05:05:14Z | 724 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"aloobun/Cypher-Mini-1.8B",
"aloobun/Cypher-CoT-1.8B",
"conversational",
"base_model:aloobun/Cypher-Mini-1.8B",
"base_model:aloobun/Cypher-CoT-1.8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T06:33:20Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- aloobun/Cypher-Mini-1.8B
- aloobun/Cypher-CoT-1.8B
base_model:
- aloobun/Cypher-Mini-1.8B
- aloobun/Cypher-CoT-1.8B
---
# Cypher-Laser-Mixtral-2x1.8B-v0.1
Cypher-Laser-Mixtral-2x1.8B-v0.1 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [aloobun/Cypher-Mini-1.8B](https://huggingface.co/aloobun/Cypher-Mini-1.8B)
* [aloobun/Cypher-CoT-1.8B](https://huggingface.co/aloobun/Cypher-CoT-1.8B)
## 🧩 Configuration
```yaml
base_model: aloobun/Cypher-Mini-1.8B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: aloobun/Cypher-Mini-1.8B
positive_prompts:
- "Write a Python script that sorts a list of integers using the bubble sort algorithm."
- "Write a JavaScript function that redirects a web page to another page after 5 seconds."
- "Describe the steps to troubleshoot a fluid dynamics issue with a water fountain."
- "Write a short story about a knight's quest to find a lost treasure, and then summarize it in one paragraph."
- "Summarize the following article with details and clarity."
- "Tell me about your favorite book and why you like it."
- source_model: aloobun/Cypher-CoT-1.8B
positive_prompts:
- "Liam saw an animal running on the farm. Q: Is it true that The animal could be a horse."
- "Based on the following paragraph can we conclude that the sentence below is true?"
- "According to the article, how do dolphins communicate with each other?"
- "Solve this math problem Solve 7644 = 4648*d - 4557*d for d."
- "If we have 3 marbles, and two roll under the counter, and one is found, how many marbles are there?"
- "What is the result of 25 divided by 5?"
- "Is it morally justifiable to lie to protect someone's feelings?"
- "Determine if the sentence is true based on the text below. Choose from options."
- "What might a person do if they forget their umbrella on a rainy day?"
- "Which of the following is an example of renewable energy: a) Coal, b) Solar, c) Oil, d) Natural gas?"
- "What is the capital of Canada? a) Toronto, b) Ottawa, c) Montreal, d) Vancouver."
- "Which of these animals is a mammal? a) Snake, b) Dolphin, c) Turtle, d) Frog."
- "Given a story, answer the question about the story."
- "Given a prompt and four completions, select the completion that is the most plausible in continuing or answering the prompt."
```
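A hedged inference sketch using the special-token prompt format documented in the Format section below; exactly where the system, prompt, and answer text sit between the tokens is an assumption.
```python
# Hedged sketch: assemble a prompt in the <|system|></s><|prompt|></s><|answer|> format (see Format below).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aloobun/Cypher-Mixtral-2x1.8B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|system|>You are a helpful assistant.</s><|prompt|>What is 25 divided by 5?</s><|answer|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```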
## Format:
```
<|system|></s><|prompt|></s><|answer|>
``` |
wangpichao/m2m1.2b_6langs | wangpichao | 2024-02-28T18:01:41Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-28T17:34:52Z | ---
license: gpl-3.0
---
|
vicgalle/RoleBeagle-11B | vicgalle | 2024-03-15T22:49:20Z | 724 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"rp",
"conversational",
"dataset:vicgalle/OpenHermesPreferences-roleplay",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-29T17:37:20Z | ---
license: apache-2.0
library_name: transformers
datasets:
- vicgalle/OpenHermesPreferences-roleplay
model-index:
- name: RoleBeagle-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.77
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.35
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.92
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.06
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.88
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/RoleBeagle-11B
name: Open LLM Leaderboard
tags:
- roleplay
- rp
---
# RoleBeagle-11B

A DPO-finetune from [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co/vicgalle/CarbonBeagle-11B-truthy) over a subset of OpenHermesPreferences containing RP conversations.
It keeps most of the intelligence from CarbonBeagle-11B, and hopefully can role-play better.
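A hedged usage sketch follows; the plain-text roleplay prompt is only an assumption, so follow whatever chat format the CarbonBeagle base model expects in your own setup.
```python
# Hedged sketch: simple text-generation pipeline usage for roleplay-style prompting.
import torch
from transformers import pipeline

generator = pipeline("text-generation", model="vicgalle/RoleBeagle-11B",
                     torch_dtype=torch.float16, device_map="auto")

prompt = ("You are Captain Elara, a weary starship captain. Stay in character.\n"
          "User: Captain, the engines are failing. What do we do?\n"
          "Captain Elara:")
print(generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)[0]["generated_text"])
```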
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__RoleBeagle-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.06|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |89.77|
|MMLU (5-Shot) |66.35|
|TruthfulQA (0-shot) |77.92|
|Winogrande (5-shot) |84.06|
|GSM8k (5-shot) |65.88| |
MSL7/INEX4-7b | MSL7 | 2024-03-02T21:22:25Z | 724 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/merge2",
"liminerity/merge1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T07:25:37Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/merge2
- liminerity/merge1
model-index:
- name: INEX4-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.42
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.28
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MSL7/INEX4-7b
name: Open LLM Leaderboard
---
Made by Liminerity
# INEX4-7b
INEX4-7b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/merge2](https://huggingface.co/liminerity/merge2)
* [liminerity/merge1](https://huggingface.co/liminerity/merge1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp-7-forged
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-7-forged
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
model: liminerity/merge
slices:
- sources:
- model: liminerity/Ingot-7b-slerp-7-forged
layer_range: [0, 32]
- model: liminerity/merge
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-7-forged
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
model: liminerity/merge1
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: liminerity/merge1
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
model: liminerity/merge2
slices:
- sources:
- model: liminerity/merge2
layer_range: [0, 32]
- model: liminerity/merge1
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
model: INEX-7b
```
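A hedged usage sketch (whether the tokenizer ships a chat template is untested here, so a plain prompt is used):
```python
# Hedged sketch: plain-prompt generation with the merged model.
import torch
from transformers import pipeline

generator = pipeline("text-generation", model="MSL7/INEX4-7b",
                     torch_dtype=torch.float16, device_map="auto")

output = generator("Explain the difference between supervised and unsupervised learning.",
                   max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.95)
print(output[0]["generated_text"])
```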
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MSL7__INEX4-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.84|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |88.79|
|MMLU (5-Shot) |64.70|
|TruthfulQA (0-shot) |74.42|
|Winogrande (5-shot) |83.90|
|GSM8k (5-shot) |70.28|
|
Kukedlc/Neural-Krishna-Multiverse-7b-v2 | Kukedlc | 2024-03-12T05:54:50Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-Krishna-Multiverse-7b",
"liminerity/M7-7b",
"base_model:Kukedlc/Neural-Krishna-Multiverse-7b",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T21:31:56Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-Krishna-Multiverse-7b
- liminerity/M7-7b
base_model:
- Kukedlc/Neural-Krishna-Multiverse-7b
- liminerity/M7-7b
license: apache-2.0
---
# Neural-Krishna-Multiverse-7b-v2
Neural-Krishna-Multiverse-7b-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural-Krishna-Multiverse-7b](https://huggingface.co/Kukedlc/Neural-Krishna-Multiverse-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Neural-Krishna-Multiverse-7b
layer_range: [0, 32]
- model: liminerity/M7-7b
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-Krishna-Multiverse-7b-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
0-hero/Matter-0.1-Slim-7B-B | 0-hero | 2024-04-07T07:27:36Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.1-Slim-B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T13:35:58Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1-Slim-B
language:
- en
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is an extended full-finetune on the [slim-B version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-B), which is curated from over 35 datasets analyzing >6B tokens.
### Training
It took ~15 hours to train 3 epochs on 4x A100s (80GB) using Axolotl for a full fine-tune.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
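A hedged sketch that fills this template and generates a reply; the loading details are assumptions.
```python
# Hedged sketch: build a ChatML prompt exactly as documented above and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.1-Slim-7B-B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = ("<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n"
          "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
          "<|im_start|>assistant\n")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```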
### Function Calling
Model also supports function calling. Additional tokens for function calling
Model function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
LeroyDyer/Mixtral_Instruct | LeroyDyer | 2024-05-16T12:54:54Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"leaderboard",
"en",
"doi:10.57967/hf/1903",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T10:55:46Z | ---
license: apache-2.0
language:
- en
tags:
- leaderboard
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## How to Get Started with the Model
```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
!pip install llama-index
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
messages_to_prompt,
completion_to_prompt,
)
model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_basemodel.q8_0.gguf"
llm = LlamaCPP(
# You can pass in the URL to a GGML model to download it automatically
model_url=model_url,
# optionally, you can set the path to a pre-downloaded model instead of model_url
model_path=None,
temperature=0.1,
max_new_tokens=256,
# llama2 has a context window of 4096 tokens, but we set it lower to allow for some wiggle room
context_window=3900,
# kwargs to pass to __call__()
generate_kwargs={},
# kwargs to pass to __init__()
# set to at least 1 to use GPU
model_kwargs={"n_gpu_layers": 1},
# transform inputs into Llama2 format
messages_to_prompt=messages_to_prompt,
completion_to_prompt=completion_to_prompt,
verbose=True,
)
prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
``` |
automerger/Inex12Yamshadow-7B | automerger | 2024-03-15T23:52:43Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:MSL7/INEX12-7b",
"base_model:automerger/YamShadow-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T23:51:53Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- MSL7/INEX12-7b
- automerger/YamShadow-7B
---
# Inex12Yamshadow-7B
Inex12Yamshadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MSL7/INEX12-7b
layer_range: [0, 32]
- model: automerger/YamShadow-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Inex12Yamshadow-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
louisgrc/Montebello_7B_SLERP | louisgrc | 2024-03-26T11:14:57Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment21-7B",
"louisgrc/Marengoli_7B_SLERP",
"base_model:yam-peleg/Experiment21-7B",
"base_model:louisgrc/Marengoli_7B_SLERP",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T11:05:56Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment21-7B
- louisgrc/Marengoli_7B_SLERP
base_model:
- yam-peleg/Experiment21-7B
- louisgrc/Marengoli_7B_SLERP
license: apache-2.0
---
# Montebello_7B_SLERP
Montebello_7B_SLERP is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [louisgrc/Marengoli_7B_SLERP](https://huggingface.co/louisgrc/Marengoli_7B_SLERP)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment21-7B
layer_range: [0, 32]
- model: louisgrc/Marengoli_7B_SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: louisgrc/Marengoli_7B_SLERP
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "louisgrc/Montebello_7B_SLERP"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
hongzoh/Yi-6B_Open-Platypus-v2 | hongzoh | 2024-03-29T11:13:46Z | 724 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T08:07:09Z | ---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---
## Base Model
- 01-ai/Yi-6B
## Train Dataset
- garage-bAInd/Open-Platypus
- format
```
"<s>[INST]" + "instruction" + " [/INST] " + 'output' + " </s>"
```
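For illustration, a minimal sketch of wrapping an instruction in this template before tokenization (the instruction text is a hypothetical example):
```python
# Sketch: build a prompt in the training format shown above.
instruction = "Summarize the theory of relativity in two sentences."  # hypothetical example
prompt = "<s>[INST]" + instruction + " [/INST] "
# At inference time the model generates the `output` portion, terminated by </s>.
```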
## Usage
```
from transformers import AutoConfig, AutoModel, AutoTokenizer
model_name = 'hongzoh/Yi-6B_Open-Platypus-v2'
config = AutoConfig.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")
tokenizer = AutoTokenizer.from_pretrained("model_name")
``` |
ShenaoZhang/0.001_idpo_noreplacerej_iter_2 | ShenaoZhang | 2024-04-08T01:30:28Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:ShenaoZhang/0.001_idpo_noreplacerej_dataset",
"base_model:ShenaoZhang/0.001_idpo_noreplacerej_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T00:43:01Z | ---
license: mit
base_model: ShenaoZhang/0.001_idpo_noreplacerej_iter_1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- ShenaoZhang/0.001_idpo_noreplacerej_dataset
model-index:
- name: 0.001_idpo_noreplacerej_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_idpo_noreplacerej_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_idpo_noreplacerej_iter_1](https://huggingface.co/ShenaoZhang/0.001_idpo_noreplacerej_iter_1) on the ShenaoZhang/0.001_idpo_noreplacerej_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
4season/alignment-model-test9 | 4season | 2024-04-08T02:25:11Z | 724 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T01:55:37Z | ---
license: apache-2.0
language:
- en
---
# 4season/model_eval_test
# **Introduction**
This model is a test version of an alignment-tuned model.
We utilize state-of-the-art instruction fine-tuning methods including direct preference optimization (DPO).
After DPO training, we linearly merged models to boost performance. |
Niggendar/wildcardxREALNSFWSFW_nsfwSFW | Niggendar | 2024-04-20T10:18:26Z | 724 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-04-20T10:16:41Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama3-TenyxChat-70B-GGUF | mradermacher | 2024-05-05T15:02:57Z | 724 | 1 | transformers | [
"transformers",
"gguf",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"llama3",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tenyx/Llama3-TenyxChat-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-04-27T19:34:46Z | ---
base_model: tenyx/Llama3-TenyxChat-70B
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/tenyx/Llama3-TenyxChat-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
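As a minimal sketch, a single-file quant from the table below can be loaded with `llama-cpp-python` (assuming the file has already been downloaded locally; the file name is one of the quants listed):
```python
# Sketch: load a downloaded GGUF quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="Llama3-TenyxChat-70B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Question: What is a GGUF file?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```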
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q5_K_M.gguf) | Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF/resolve/main/Llama3-TenyxChat-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/miquplus-midnight-70b-i1-GGUF | mradermacher | 2024-06-13T07:58:05Z | 724 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jukofyork/miquplus-midnight-70b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T16:21:26Z | ---
base_model: jukofyork/miquplus-midnight-70b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jukofyork/miquplus-midnight-70b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
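For the split files in the table below (e.g. the Q6_K parts), a minimal sketch of joining them into a single GGUF by plain byte concatenation, as described in the READMEs linked above:
```python
# Sketch: concatenate split GGUF parts into one file before loading.
import shutil

parts = [
    "miquplus-midnight-70b.i1-Q6_K.gguf.part1of2",
    "miquplus-midnight-70b.i1-Q6_K.gguf.part2of2",
]
with open("miquplus-midnight-70b.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)
```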
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF/resolve/main/miquplus-midnight-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Coldestadam/Breakout_Mentors_SpongeBob_Model | Coldestadam | 2021-07-13T05:27:25Z | 723 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Entry not found |
s-nlp/gpt2-base-gedi-detoxification | s-nlp | 2022-11-05T16:05:17Z | 723 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conditional-text-generation",
"en",
"arxiv:2109.08914",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- text-generation
- conditional-text-generation
---
# Model Details
This is a conditional language model based on [gpt2-medium](https://huggingface.co/gpt2-medium/) but with a vocabulary from [t5-base](https://huggingface.co/t5-base), for compatibility with T5-based paraphrasers such as [t5-paranmt-detox](https://huggingface.co/SkolkovoInstitute/t5-paranmt-detox). The model is conditional on two styles, `toxic` and `normal`, and was fine-tuned on the dataset from the Jigsaw [toxic comment classification challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) (Dale et al, 2021) that describes its possible usage in more detail.
An example of its use and the code for its training is given in https://github.com/skoltech-nlp/detox.
## Model Description
- **Developed by:** SkolkovoInstitute
- **Model type:** Conditional Text Generation
- **Language:** English
- **Related Models:**
- **Parent Model:** [gpt2-medium](https://huggingface.co/gpt2-medium/)
- **Source of vocabulary:** [t5-base](https://huggingface.co/t5-base)
- **Resources for more information:**
- The paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914)
- Its repository https://github.com/skoltech-nlp/detox.
# Uses
The model is intended for use as a discriminator in a text detoxification pipeline using the ParaGeDi approach (see [the paper](https://arxiv.org/abs/2109.08914) for more details). It can also be used for text generation conditional on toxic or non-toxic style, but we do not know how to condition it on attributes other than toxicity, so we do not recommend this usage. Another possible use is as a toxicity classifier (using the Bayes rule), but the model is not expected to perform better than e.g. a BERT-based standard classifier.
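A minimal loading sketch (the full ParaGeDi pipeline, including the style-conditioning logic, is in the repository linked above):
```python
# Sketch: load the checkpoint as a causal LM with its bundled (T5-based) tokenizer.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "s-nlp/gpt2-base-gedi-detoxification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```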
# Bias, Risks, and Limitations
The model inherits all the risks of its parent model, [gpt2-medium](https://huggingface.co/gpt2-medium/). It also inherits all the biases of the [Jigsaw dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) on which it was fine-tuned. The model is intended to be conditional on style, but in fact it does not clearly separate the concepts of style and content, so it might regard some texts as toxic or safe based not on the style, but on their topics or keywords.
# Training Details
See the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/2109.08914) and [the associated code](https://github.com/s-nlp/detox/tree/main/emnlp2021/style_transfer/paraGeDi).
# Evaluation
The model has not been evaluated on its own, only as a part as a ParaGeDi text detoxification pipeline (see [the paper](https://arxiv.org/abs/2109.08914)).
# Citation
**BibTeX:**
```
@inproceedings{dale-etal-2021-text,
title = "Text Detoxification using Large Pre-trained Neural Models",
author = "Dale, David and
Voronov, Anton and
Dementieva, Daryna and
Logacheva, Varvara and
Kozlova, Olga and
Semenov, Nikita and
Panchenko, Alexander",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.629",
pages = "7979--7996",
}
```
|
trig/sokka-chatbot-test | trig | 2021-08-28T18:58:58Z | 723 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
# chatbot test with sokka from atla |
umd-zhou-lab/recycled-alpaca-7b-v2.0 | umd-zhou-lab | 2023-10-22T16:10:38Z | 723 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2310.11716",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-22T14:57:24Z | ---
license: llama2
language:
- en
---
# Model Card for umd-zhou-lab/recycled-alpaca-7b-v2.0
<!-- Provide a quick summary of what the model is/does. -->
This model was trained by fine-tuning Llama-2 on recycled Alpaca data V2.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** UMD Tianyi Zhou Lab
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **GitHub:** [Reflection-Tuning](https://github.com/tianyi-lab/Reflection_Tuning)
- **Paper:** [Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning](https://arxiv.org/abs/2310.11716)
- **Data:** Coming soon
## Uses
The primary use of this model is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training
We use the prompt from [FastChat](https://github.com/lm-sys/FastChat):
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am ...</s>......
```
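A minimal sketch of assembling this prompt for a single-turn request (the user message is a hypothetical example):
```python
# Sketch: build the Vicuna-style prompt shown above.
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
user_message = "Who are you?"  # hypothetical example
prompt = f"{system} USER: {user_message} ASSISTANT:"
```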
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | Warmup Rate |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Recycled Models (7B) | 128 | 2e-5 | 3 | 2048 | 0 | 0.03 |
## Performance
The following table provides a comparison between our recycled models (V2) and baseline models on the AlpacaEval Leaderboard and Huggingface Open LLM Leaderboard. <br>
The V2 Recycled Alpaca Data and WizardLM data, and the corresponding paper will be released soon.
| Model | **AlpacaEval** | **Avg** | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | Link |
|:--|--:|--:|--:|--:|--:|--:|:-:|
| **Alpaca 7B** | 26.46 | 50.21 | 42.65 | 76.91 | 41.73 | 39.55 | / |
| **Recycled Alpaca 7B V2.0** | 79.58 | 56.05 | 54.01 | 78.07 | 46.69 | 45.41 | [[hf-Link]](https://huggingface.co/umd-zhou-lab/recycled-alpaca-7b-v2.0) |
| **WizardLM 7B** | 67.64 | 54.18 | 51.60 | 77.70 | 42.70 | 44.70 | / |
| **Recycled WizardLM 7B V2.0** | 83.48 | 56.79 | 54.78 | 77.86 | 45.63 | 48.91 | [[hf-Link]](https://huggingface.co/umd-zhou-lab/recycled-wizardlm-7b-v2.0) |
## Citation
Please consider citing our paper if you think our codes, data, or models are useful. Thank you!
```
@misc{li2023reflectiontuning,
title={Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning},
author={Ming Li and Lichang Chen and Jiuhai Chen and Shwai He and Heng Huang and Jiuxiang Gu and Tianyi Zhou},
year={2023},
eprint={2310.11716},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
osanseviero/mistral-instruct-moe-experimental | osanseviero | 2024-03-18T11:15:44Z | 723 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"moe",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-10T16:23:12Z | ---
license: apache-2.0
tags:
- mergekit
- merge
- moe
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-instruct-moe-experimental
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.4
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=osanseviero/mistral-instruct-moe-experimental
name: Open LLM Leaderboard
---
# Mistral Instruct MoE experimental
This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit), using the `mixtral` branch.
**This is an experimental model and has nothing to do with Mixtral. Mixtral is not a merge of models per se, but a transformer with MoE layers learned during training**
This uses a random gate, so I don't expect great results. We'll see!
## Merge Details
### Merge Method
This model was merged using the MoE merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: random
dtype: bfloat16
experts:
- source_model: mistralai/Mistral-7B-Instruct-v0.2
positive_prompts: [""]
- source_model: mistralai/Mistral-7B-Instruct-v0.1
positive_prompts: [""]
```
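A minimal usage sketch, assuming the merged checkpoint loads like a standard Mixtral-architecture causal LM in `transformers`:
```python
# Sketch: load the merged model and run a short chat-formatted generation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "osanseviero/mistral-instruct-moe-experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is a mixture of experts?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```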
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_osanseviero__mistral-instruct-moe-experimental)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.39|
|AI2 Reasoning Challenge (25-Shot)|61.01|
|HellaSwag (10-Shot) |81.55|
|MMLU (5-Shot) |58.22|
|TruthfulQA (0-shot) |60.40|
|Winogrande (5-shot) |76.09|
|GSM8k (5-shot) |31.08|
|
vicgalle/franken-SOLAR-18B-v1.0 | vicgalle | 2024-03-04T12:12:55Z | 723 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"solar",
"conversational",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-13T18:30:07Z | ---
license: apache-2.0
tags:
- mergekit
- merge
- solar
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
model-index:
- name: franken-SOLAR-18B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/franken-SOLAR-18B-v1.0
name: Open LLM Leaderboard
---
# vicgalle/franken-SOLAR-18B-v1.0
This is a SOLAR-like model upscaled to 18B.
It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct.

Evaluations coming soon!
This model has very good writing capabilities (compared to SOLAR-10.7B), especially for role-playing.
Quantized GGUF variants here https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0-GGUF
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [0, 12]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [6, 18]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [13, 25]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [19, 31]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [26, 38]
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [32, 44]
- sources:
- model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
layer_range: [39, 48]
merge_method: passthrough
dtype: float16
```
### Usage
You can use the provided template:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
# load_in_4bit requires the bitsandbytes package
model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)

# SYSTEM_PROMPT and USER_PROMPT are placeholders for your own prompts
conversation = [ {'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT} ]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__franken-SOLAR-18B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.03|
|AI2 Reasoning Challenge (25-Shot)|65.53|
|HellaSwag (10-Shot) |86.45|
|MMLU (5-Shot) |63.72|
|TruthfulQA (0-shot) |62.14|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |45.79|
|
kaitchup/Mayonnaise-4in1-01 | kaitchup | 2024-03-17T10:09:03Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T12:52:13Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
model-index:
- name: Mayonnaise-4in1-01
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-01
name: Open LLM Leaderboard
---
# Model Card for Model ID
This is a TIES merge of models created with [mergekit](https://github.com/cg123/mergekit) and based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Model Details
The model was created using a recipe detailed in this article:
[The Mayonnaise: Rank First on the Open LLM Leaderboard with TIES-Merging
](https://kaitchup.substack.com/p/the-mayonnaise-rank-first-on-the)
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Model Sources
Created with mergekit with this configuration:
```
models:
- model: mncai/mistral-7b-dpo-v5
# no parameters necessary for base model
- model: flemmingmiguel/MBX-7B
parameters:
density: 0.5
weight: 0.5
- model: BarryFutureman/NeuralTurdusVariant1-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
normalize: true
dtype: float16
```
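A minimal usage sketch, assuming the merged checkpoint behaves like any other Mistral-7B-style model with the `transformers` pipeline:
```python
# Sketch: run the merged model through the text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="kaitchup/Mayonnaise-4in1-01",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(pipe("Explain TIES-merging in one paragraph.", max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```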
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kaitchup__Mayonnaise-4in1-01)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.19|
|AI2 Reasoning Challenge (25-Shot)|73.46|
|HellaSwag (10-Shot) |88.47|
|MMLU (5-Shot) |64.95|
|TruthfulQA (0-shot) |69.18|
|Winogrande (5-shot) |84.14|
|GSM8k (5-shot) |70.96|
|
louisbrulenaudet/Pearl-3x7B | louisbrulenaudet | 2024-02-09T07:23:06Z | 723 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"dvilasuero/DistilabelBeagle14-7B",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"WizardLM/WizardMath-7B-V1.1",
"Maths",
"Code",
"Python",
"conversational",
"en",
"base_model:dvilasuero/DistilabelBeagle14-7B",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-07T22:02:37Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- dvilasuero/DistilabelBeagle14-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- WizardLM/WizardMath-7B-V1.1
- Maths
- Code
- Python
base_model:
- dvilasuero/DistilabelBeagle14-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- WizardLM/WizardMath-7B-V1.1
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>
# Pearl-3x7B, an xtraordinary Mixture of Experts (MoE) for data science
Pearl-3x7B is a Mixture of Experts (MoE) made with the following models :
* [dvilasuero/DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
A Mixture of Experts (MoE) model represents a sophisticated architecture that amalgamates the capabilities of multiple specialized models to address a wide array of tasks within a unified framework. Within the realm of a MoE model tailored for a chat application, the integration of expertise spanning three distinct domains - chat, code, and mathematics - substantially enhances its capacity to furnish nuanced and precise responses to a diverse spectrum of user inquiries.
The initial expert model, honed for chat applications, exhibits prowess in comprehending natural language nuances, conversational dynamics, and contextual cues. Drawing upon extensive conversational data, it adeptly generates engaging and contextually pertinent responses, thereby fostering meaningful interactions with users.
The subsequent expert model, centered on code, brings to the fore proficiency in programming languages, algorithms, and software engineering principles. Possessing a deep-seated understanding of syntax, logical constructs, and problem-solving methodologies, it deftly tackles queries spanning coding challenges, debugging assistance, and software development inquiries.
Lastly, the third expert model, specializing in mathematics, boasts expertise in mathematical reasoning, problem-solving strategies, and analytical techniques. Armed with a breadth of knowledge encompassing arithmetic, algebra, calculus, and beyond, it offers precise solutions, lucid explanations, and profound insights for mathematical queries, equations, and proofs.
## Configuration
```yaml
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
experts:
- source_model: dvilasuero/DistilabelBeagle14-7B
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "help"
- "guide"
- "assist"
- "answer"
- "support"
- "clarify"
- "elaborate"
- "educate"
- "inform"
- "advise"
- "instruct"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- "develop"
- "debug"
- "optimize"
- "software"
- "engineer"
- "web"
- "application"
- "framework"
- "library"
- "syntax"
- "logic"
- "compile"
- "execute"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
- "calculate"
- "analyze"
- "derive"
- "compute"
- "numbers"
- "equation"
- "theorem"
- "proof"
- "geometry"
- "trigonometry"
- "statistics"
- "probability"
- "algebra"
- "integral"
```
## Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "louisbrulenaudet/Pearl-3x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
author = {Louis Brulé Naudet},
title = {Pearl-3x7B, an xtraordinary Mixture of Experts (MoE) for data science},
  year = {2023},
howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-3x7B}},
}
```
## Feedback
If you have any feedback, please reach out at [[email protected]](mailto:[email protected]). |
tyson0420/stack_llama_full | tyson0420 | 2024-02-15T01:30:45Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T20:38:23Z | ---
library_name: transformers
license: bigscience-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lodrick-the-lafted/Hermes-Instruct-7B-v0.2 | lodrick-the-lafted | 2024-03-04T12:23:36Z | 723 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"dataset:lodrick-the-lafted/Hermes-40K",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-11T08:49:12Z | ---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-40K
model-index:
- name: Hermes-Instruct-7B-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Hermes-Instruct-7B-v0.2
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-v0.2/resolve/main/hermes-instruct.png>
# Hermes-Instruct-7B-v0.2
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on a subset of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), in Alpaca format.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer defaults to Mistral-style.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Hermes-Instruct-7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Hermes-Instruct-7B-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.82|
|AI2 Reasoning Challenge (25-Shot)|60.92|
|HellaSwag (10-Shot) |82.96|
|MMLU (5-Shot) |60.05|
|TruthfulQA (0-shot) |61.01|
|Winogrande (5-shot) |76.87|
|GSM8k (5-shot) |41.09|
|
paulml/DPOB-NMTOB-7B | paulml | 2024-02-12T12:00:10Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/dpo-binarized-NeutrixOmnibe-7B",
"paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T11:56:12Z | ---
tags:
- merge
- mergekit
- lazymergekit
- eren23/dpo-binarized-NeutrixOmnibe-7B
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
license: cc-by-nc-4.0
---
# DPOB-NMTOB-7B
DPOB-NMTOB-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: eren23/dpo-binarized-NeutrixOmnibe-7B
layer_range: [0, 32]
- model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/DPOB-NMTOB-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
liminerity/phigment6-slerp | liminerity | 2024-03-11T18:21:19Z | 723 | 3 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/phive",
"mobiuslabsgmbh/aanaphi2-v0.1",
"custom_code",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T05:16:11Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/phive
- mobiuslabsgmbh/aanaphi2-v0.1
model-index:
- name: phigment6-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 62.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/phigment6-slerp
name: Open LLM Leaderboard
---
RANKED NUMBER 1 FOR 3B MODELS!
# phigment6-slerp
Title: Creating the Number 1 3B Parameter LLM in the World - Phigment6, A Phi-2 Based Model Using Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS) Methodology
Abstract
The rapid advancements in artificial intelligence have led to the development of large language models (LLMs). In this paper, we present Phigment6, an innovative 3 billion parameter LLM built on the foundation of the Phi-2 architecture. We detail our unique methodology called Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS), which involves the strategic combination of multiple pretrained models to create an even more powerful and accurate language model. Through this approach, we successfully merge amu/dpo-phi2, g-ronimo/phi-2-OpenHermes-2.5, vince62s/phi-2-psy, and mobiuslabsgmbh/aanaphi2-v0.1, leading to the creation of Phigment6. Our results demonstrate significant improvements in performance compared to existing state-of-the-art LLMs.
Introduction
Recent years have witnessed tremendous growth in natural language processing capabilities, driven by advances in deep learning techniques and the introduction of transformers in NLP tasks. Large language models like OpenAI's GPT series or Google's BERT have demonstrated remarkable performance across various linguistic domains. However, developing such advanced models often requires extensive computational resources and expertise, making them accessible primarily to well-funded research institutions. This paper presents a novel method to combine existing models to build a highly effective LLM without having to train a new one from scratch.
Methodology: Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS)
Our proposed approach, DKERS, consists of two main steps: merging and refining. Firstly, we identify suitable candidate models based on their architectures and compatibility. Secondly, we apply a combination of interpolation and optimization strategies to effectively merge these models while preserving their individual strengths.
Step 1: Candidate Selection
We begin by selecting four compatible models as potential candidates for merging:
* amu/dpo-phi2: A baseline Phi-2 model, providing a strong foundation for further enhancement.
* g-ronimo/phi-2-OpenHermes-2.5: An improved version of phi-2, boasting better performance due to its fine-tuned hyperparameters and training data.
* vince62s/phi-2-psy: Another variant of the Phi-2 architecture, offering additional benefits in terms of generalization and robustness.
* mobiuslabsgmbh/aanaphi2-v0.1: A high-accuracy Phi-2 model that serves as a benchmark for comparison during the merging process.
Step 2: Model Merging
To merge the selected models, we employ a strategy known as spherical linear interpolation (SLERP), which enables us to smoothly transition between the parameters of two models. Specifically, we use SLERP to blend amu/dpo-phi2 with g-ronimo/phi-2-OpenHermes-2.5. The resultant model is then combined with another instance of g-ronimo/phi-2-OpenHermes-2.5 using the same blending technique. Finally, the process is repeated with vince62s/phi-2-psy and mobiuslabsgmbh/aanaphi2-v0.1. Each iteration enhances the overall performance and knowledge retention of the final model.
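To make the blending step concrete, the following is a minimal, illustrative sketch of SLERP applied to a single parameter tensor. It is a toy implementation for intuition only, not the actual merging code (mergekit handles per-tensor and per-layer interpolation factors):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two parameter tensors (toy version)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # (nearly) colinear tensors: fall back to linear interpolation
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# Example: blend one weight matrix from two checkpoints at t = 0.5
w_a, w_b = torch.randn(256, 256), torch.randn(256, 256)
w_merged = slerp(0.5, w_a, w_b)
```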
Results
After following the DKERS methodology, we obtain Phigment6, a powerful and efficient 3 billion parameter LLM. Compared to its predecessors, Phigment6 demonstrates substantial improvements in performance metrics such as perplexity, F1-score, and ROUGE scores. Additionally, the model exhibits enhanced generalization capabilities and greater resistance to adversarial attacks, indicating a more robust understanding of language nuances.
Conclusion
In summary, we presented Phigment6, a cutting-edge 3 billion parameter LLM, constructed via the novel Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS) methodology. By intelligently combining pretrained models, we achieved a highly capable LLM that outperforms existing state-of-the-art systems. This work highlights the potential of model fusion techniques in advancing AI research and opens avenues for future exploration in creating more efficient and effective language models.
phigment6-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/phive](https://huggingface.co/liminerity/phive)
* [mobiuslabsgmbh/aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/phive
layer_range: [0, 32]
- model: mobiuslabsgmbh/aanaphi2-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/phive
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__phigment6-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.58|
|AI2 Reasoning Challenge (25-Shot)|62.63|
|HellaSwag (10-Shot) |77.25|
|MMLU (5-Shot) |58.65|
|TruthfulQA (0-shot) |50.49|
|Winogrande (5-shot) |73.88|
|GSM8k (5-shot) |58.61|
|
vicgalleorg/test1 | vicgalleorg | 2024-03-02T08:25:34Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:vicgalle/RoleBeagle-11B",
"base_model:vicgalle/CarbonBeagle-11B-truthy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T18:43:16Z | ---
base_model:
- vicgalle/RoleBeagle-11B
- vicgalle/CarbonBeagle-11B-truthy
tags:
- mergekit
- merge
license: apache-2.0
---
# test
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [vicgalle/RoleBeagle-11B](https://huggingface.co/vicgalle/RoleBeagle-11B)
* [vicgalle/CarbonBeagle-11B-truthy](https://huggingface.co/vicgalle/CarbonBeagle-11B-truthy)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: vicgalle/CarbonBeagle-11B-truthy
parameters:
weight: 0.5
- model: vicgalle/RoleBeagle-11B
parameters:
weight: 0.5
merge_method: linear
dtype: float16
``` |
Locutusque/Hyperion-3.0-Mistral-7B-alpha | Locutusque | 2024-03-22T22:49:19Z | 723 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"chemistry",
"medical",
"en",
"dataset:Locutusque/hyperion-v3.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-16T02:45:21Z | ---
library_name: transformers
tags:
- code
- chemistry
- medical
license: apache-2.0
datasets:
- Locutusque/hyperion-v3.0
language:
- en
widget:
- example_title: Reasoning
text: |
<|im_start|>user
An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation? A. Planetary density will decrease. B. Planetary years will become longer. C. Planetary days will become shorter. D. Planetary gravity will become stronger.<|im_end|>
<|im_start|>assistant
output:
text: |
C. Planetary days will become shorter.
The increase in rotation is due to the transfer of angular momentum from the meteorite impact to the planet. This means that more kinetic energy is imparted on the planet, making it spin faster. As a result, the planet's rotational period (the time it takes for the planet to complete one rotation) decreases. So, planetary days will become shorter.
A. Planetary density will not change significantly as a result of the meteorite impact. The increase in rotation is primarily an issue of angular momentum transfer, and density is mainly determined by the composition of the planet.
B. Planetary years would become longer if the planet took less time to rotate around its axis, since a year is based on how long it takes the planet to revolve around its star. However, meteorite impacts generally affect rotation, not revolution.
D. Planetary gravity would remain the same or possibly slightly decrease. A meteorite impact alone does not have a significant enough effect on the mass of a planet to alter its gravitational force appreciably.
---
# Hyperion-3.0-Mistral-7B-alpha
## Model Details
- **Model Name**: Locutusque/Hyperion-3.0-Mistral-7B-alpha
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Publisher**: Locutusque
- **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
- **Language**: Multi-domain, English language.
- **License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-3.0-Mistral-7B-alpha` is a state-of-the-art language model fine-tuned on the Hyperion-v3.0 dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning. This model is designed to greatly outperform its predecessors.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `Locutusque/Hyperion-3.0-Mistral-7B-alpha` model was fine-tuned on 200,000 examples of the Hyperion-3.0 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-alpha-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-alpha-GGUF
## Evaluation Results
Zero-shot AGIEval
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|------|--------|-----:|---|-----:|
|agieval_nous |N/A |none |None |acc_norm|0.3500|± |0.0092|
| | |none |None |acc |0.3519|± |0.0093|
| - agieval_aqua_rat | 1|none |None |acc |0.2087|± |0.0255|
| | |none |None |acc_norm|0.2165|± |0.0259|
| - agieval_logiqa_en | 1|none |None |acc |0.3164|± |0.0182|
| | |none |None |acc_norm|0.3533|± |0.0187|
| - agieval_lsat_ar | 1|none |None |acc |0.2348|± |0.0280|
| | |none |None |acc_norm|0.2174|± |0.0273|
| - agieval_lsat_lr | 1|none |None |acc |0.3529|± |0.0212|
| | |none |None |acc_norm|0.3647|± |0.0213|
| - agieval_lsat_rc | 1|none |None |acc |0.4721|± |0.0305|
| | |none |None |acc_norm|0.4201|± |0.0301|
| - agieval_sat_en | 1|none |None |acc |0.6019|± |0.0342|
| | |none |None |acc_norm|0.6117|± |0.0340|
| - agieval_sat_en_without_passage| 1|none |None |acc |0.4078|± |0.0343|
| | |none |None |acc_norm|0.3835|± |0.0340|
| - agieval_sat_math | 1|none |None |acc |0.3091|± |0.0312|
| | |none |None |acc_norm|0.2364|± |0.0287|
5 shot CoT MMLU
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|-------------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5924|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5077|± |0.0206|
| - mmlu_flan_cot_fewshot_formal_logic | 0|get-answer| 0|exact_match|0.2143|± |0.1138|
| - mmlu_flan_cot_fewshot_high_school_european_history | 0|get-answer| 0|exact_match|0.6111|± |0.1182|
| - mmlu_flan_cot_fewshot_high_school_us_history | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_high_school_world_history | 0|get-answer| 0|exact_match|0.6154|± |0.0973|
| - mmlu_flan_cot_fewshot_international_law | 0|get-answer| 0|exact_match|0.9231|± |0.0769|
| - mmlu_flan_cot_fewshot_jurisprudence | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_logical_fallacies | 0|get-answer| 0|exact_match|0.7222|± |0.1086|
| - mmlu_flan_cot_fewshot_moral_disputes | 0|get-answer| 0|exact_match|0.5526|± |0.0817|
| - mmlu_flan_cot_fewshot_moral_scenarios | 0|get-answer| 0|exact_match|0.3900|± |0.0490|
| - mmlu_flan_cot_fewshot_philosophy | 0|get-answer| 0|exact_match|0.7647|± |0.0738|
| - mmlu_flan_cot_fewshot_prehistory | 0|get-answer| 0|exact_match|0.7143|± |0.0775|
| - mmlu_flan_cot_fewshot_professional_law | 0|get-answer| 0|exact_match|0.3471|± |0.0366|
| - mmlu_flan_cot_fewshot_world_religions | 0|get-answer| 0|exact_match|0.8947|± |0.0723|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6921|± |0.0240|
| - mmlu_flan_cot_fewshot_business_ethics | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_clinical_knowledge | 0|get-answer| 0|exact_match|0.5517|± |0.0940|
| - mmlu_flan_cot_fewshot_college_medicine | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_global_facts | 0|get-answer| 0|exact_match|0.6000|± |0.1633|
| - mmlu_flan_cot_fewshot_human_aging | 0|get-answer| 0|exact_match|0.6522|± |0.1015|
| - mmlu_flan_cot_fewshot_management | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_marketing | 0|get-answer| 0|exact_match|0.8400|± |0.0748|
| - mmlu_flan_cot_fewshot_medical_genetics | 0|get-answer| 0|exact_match|1.0000|± |0.0000|
| - mmlu_flan_cot_fewshot_miscellaneous | 0|get-answer| 0|exact_match|0.7791|± |0.0450|
| - mmlu_flan_cot_fewshot_nutrition | 0|get-answer| 0|exact_match|0.6667|± |0.0833|
| - mmlu_flan_cot_fewshot_professional_accounting | 0|get-answer| 0|exact_match|0.4194|± |0.0901|
| - mmlu_flan_cot_fewshot_professional_medicine | 0|get-answer| 0|exact_match|0.6774|± |0.0853|
| - mmlu_flan_cot_fewshot_virology | 0|get-answer| 0|exact_match|0.3889|± |0.1182|
| - mmlu_flan_cot_fewshot_social_sciences |N/A |get-answer| 0|exact_match|0.6973|± |0.0239|
| - mmlu_flan_cot_fewshot_econometrics | 0|get-answer| 0|exact_match|0.3333|± |0.1421|
| - mmlu_flan_cot_fewshot_high_school_geography | 0|get-answer| 0|exact_match|0.9091|± |0.0627|
| - mmlu_flan_cot_fewshot_high_school_government_and_politics| 0|get-answer| 0|exact_match|0.8095|± |0.0878|
| - mmlu_flan_cot_fewshot_high_school_macroeconomics | 0|get-answer| 0|exact_match|0.6279|± |0.0746|
| - mmlu_flan_cot_fewshot_high_school_microeconomics | 0|get-answer| 0|exact_match|0.6154|± |0.0973|
| - mmlu_flan_cot_fewshot_high_school_psychology | 0|get-answer| 0|exact_match|0.9167|± |0.0360|
| - mmlu_flan_cot_fewshot_human_sexuality | 0|get-answer| 0|exact_match|0.5000|± |0.1508|
| - mmlu_flan_cot_fewshot_professional_psychology | 0|get-answer| 0|exact_match|0.6667|± |0.0572|
| - mmlu_flan_cot_fewshot_public_relations | 0|get-answer| 0|exact_match|0.5833|± |0.1486|
| - mmlu_flan_cot_fewshot_security_studies | 0|get-answer| 0|exact_match|0.4444|± |0.0975|
| - mmlu_flan_cot_fewshot_sociology | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_us_foreign_policy | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.5164|± |0.0265|
| - mmlu_flan_cot_fewshot_abstract_algebra | 0|get-answer| 0|exact_match|0.4545|± |0.1575|
| - mmlu_flan_cot_fewshot_anatomy | 0|get-answer| 0|exact_match|0.3571|± |0.1329|
| - mmlu_flan_cot_fewshot_astronomy | 0|get-answer| 0|exact_match|0.5000|± |0.1291|
| - mmlu_flan_cot_fewshot_college_biology | 0|get-answer| 0|exact_match|0.5625|± |0.1281|
| - mmlu_flan_cot_fewshot_college_chemistry | 0|get-answer| 0|exact_match|0.3750|± |0.1830|
| - mmlu_flan_cot_fewshot_college_computer_science | 0|get-answer| 0|exact_match|0.2727|± |0.1408|
| - mmlu_flan_cot_fewshot_college_mathematics | 0|get-answer| 0|exact_match|0.2727|± |0.1408|
| - mmlu_flan_cot_fewshot_college_physics | 0|get-answer| 0|exact_match|0.4545|± |0.1575|
| - mmlu_flan_cot_fewshot_computer_security | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_conceptual_physics | 0|get-answer| 0|exact_match|0.6154|± |0.0973|
| - mmlu_flan_cot_fewshot_electrical_engineering | 0|get-answer| 0|exact_match|0.6875|± |0.1197|
| - mmlu_flan_cot_fewshot_elementary_mathematics | 0|get-answer| 0|exact_match|0.7317|± |0.0701|
| - mmlu_flan_cot_fewshot_high_school_biology | 0|get-answer| 0|exact_match|0.7188|± |0.0808|
| - mmlu_flan_cot_fewshot_high_school_chemistry | 0|get-answer| 0|exact_match|0.3636|± |0.1050|
| - mmlu_flan_cot_fewshot_high_school_computer_science | 0|get-answer| 0|exact_match|0.6667|± |0.1667|
| - mmlu_flan_cot_fewshot_high_school_mathematics | 0|get-answer| 0|exact_match|0.4138|± |0.0931|
| - mmlu_flan_cot_fewshot_high_school_physics | 0|get-answer| 0|exact_match|0.2353|± |0.1060|
| - mmlu_flan_cot_fewshot_high_school_statistics | 0|get-answer| 0|exact_match|0.4348|± |0.1057|
| - mmlu_flan_cot_fewshot_machine_learning | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| Groups |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5924|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5077|± |0.0206|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6921|± |0.0240|
| - mmlu_flan_cot_fewshot_social_sciences|N/A |get-answer| 0|exact_match|0.6973|± |0.0239|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.5164|± |0.0265|
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-3.0-Mistral-7B-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
This model is also very compliant; it will respond to any request. Please make sure to build upon this model with DPO if you plan on using it for enterprise-level deployment.
## Licensing Information
This model is released under the Apache-2.0 license. |
Hertz/Mistral-Hermes-2x7b | Hertz | 2024-03-18T07:17:40Z | 723 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T06:59:15Z | ---
license: apache-2.0
---
# JUST A TEST
---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- NousResearch/Hermes-2-Pro-Mistral-7B
base_model:
- mistralai/Mistral-7B-v0.1
- NousResearch/Hermes-2-Pro-Mistral-7B
---
# Mistral-Hermes-2x7b
Mistral-Hermes-2x7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 32]
- model: NousResearch/Hermes-2-Pro-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Hertz/Mistral-Hermes-2x7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ToastyPigeon/SmolPlatypus-1.5B | ToastyPigeon | 2024-03-19T23:11:29Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"dataset:garage-bAInd/Open-Platypus",
"base_model:ToastyPigeon/SmolLlama-1.5B",
"base_model:ToastyPigeon/SmolPlatypus-1.5B-LoRA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T22:31:29Z | ---
base_model:
- ToastyPigeon/SmolLlama-1.5B
- ToastyPigeon/SmolPlatypus-1.5B-LoRA
tags:
- mergekit
- merge
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
---
# SmolPlatypus-1.5b
This is a proof-of-concept model and should not be used for anything.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The LoRA adapter was created with axolotl using QLoRA (I know, it's misnamed), training a SOLAR-style stack merge dubbed "SmolLlama-1.5B" on the Open-Platypus dataset for approximately 2 hours on 2x RTX 3060.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [ToastyPigeon/SmolLlama-1.5B](https://huggingface.co/ToastyPigeon/SmolLlama-1.5B) + [ToastyPigeon/SmolPlatypus-1.5B-LoRA](https://huggingface.co/ToastyPigeon/SmolPlatypus-1.5B-LoRA)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ToastyPigeon/SmolLlama-1.5B+ToastyPigeon/SmolPlatypus-1.5B-LoRA
merge_method: passthrough
dtype: float16
``` |
Inv/Konstanta-V4-Alpha-7B | Inv | 2024-03-25T14:33:41Z | 723 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"senseable/WestLake-7B-v2",
"KatyTheCutie/LemonadeRP-4.5.3",
"roleplay",
"rp",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Inv/Konstanta-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T14:44:57Z | ---
base_model:
- Inv/Konstanta-7B
library_name: transformers
tags:
- mergekit
- merge
- senseable/WestLake-7B-v2
- KatyTheCutie/LemonadeRP-4.5.3
- roleplay
- rp
license: apache-2.0
---
# Konstanta-V4-Alpha-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Alright, so, this model seems to be REALLY good. Konstanta-7B is pretty good as well, but this one is still marginally better.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Inv/Konstanta-7B](https://huggingface.co/Inv/Konstanta-7B) as a base.
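As a rough intuition (a toy sketch under simplifying assumptions, not the mergekit implementation): DARE randomly drops a fraction of each fine-tune's parameter deltas and rescales the survivors, while TIES keeps only the contributions whose sign agrees with the elected majority sign before adding the merged delta back onto the base. The real config below additionally uses per-layer weight schedules rather than a single scalar weight.
```python
import torch

def dare_ties(base, finetunes, densities, weights, seed=0):
    """Toy DARE-TIES merge of a single parameter tensor (illustrative only)."""
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(finetunes, densities, weights):
        delta = ft - base
        keep = (torch.rand_like(delta) < density).float()
        deltas.append(weight * delta * keep / density)      # DARE: drop + rescale
    stacked = torch.stack(deltas)
    elected_sign = torch.sign(stacked.sum(dim=0))            # TIES: majority sign
    agrees = (torch.sign(stacked) == elected_sign).float()
    return base + (stacked * agrees).sum(dim=0)

# Example with random stand-in tensors and simplified scalar weights
base = torch.randn(8, 8)
merged = dare_ties(
    base,
    finetunes=[base + 0.1 * torch.randn(8, 8), base + 0.1 * torch.randn(8, 8)],
    densities=[0.65, 0.85],
    weights=[0.4, 0.4],
)
```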
### Models Merged
The following models were included in the merge:
* senseable/WestLake-7B-v2
* KatyTheCutie/LemonadeRP-4.5.3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
dtype: bfloat16
parameters:
int8_mask: true
base_model: Inv/Konstanta-7B
models:
- model: Inv/Konstanta-7B
- model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
density: 0.65
weight: [0.65, 0.40, 0.35, 0.30, 0.35, 0.40, 0.25]
- model: senseable/WestLake-7B-v2
parameters:
density: 0.85
weight: [0.25, 0.40, 0.35, 0.30, 0.35, 0.40, 0.65]
``` |
Kukedlc/Neural-4-Maths-7b | Kukedlc | 2024-04-08T03:51:01Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"MTSAIR/multi_verse_model",
"Kukedlc/NeuralSirKrishna-7b",
"Kukedlc/NeuralMaths-Experiment-7b",
"Kukedlc/Neural4gsm8k",
"base_model:liminerity/M7-7b",
"base_model:MTSAIR/multi_verse_model",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"base_model:Kukedlc/Neural4gsm8k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T05:19:20Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- MTSAIR/multi_verse_model
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralMaths-Experiment-7b
- Kukedlc/Neural4gsm8k
base_model:
- liminerity/M7-7b
- MTSAIR/multi_verse_model
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/NeuralMaths-Experiment-7b
- Kukedlc/Neural4gsm8k
license: apache-2.0
---
# Neural-4-Maths-7b
Neural-4-Maths-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
# No parameters necessary for base model
- model: liminerity/M7-7b
parameters:
density: 0.66
weight: 0.2
- model: MTSAIR/multi_verse_model
parameters:
density: 0.66
weight: 0.2
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: 0.66
weight: 0.2
- model: Kukedlc/NeuralMaths-Experiment-7b
parameters:
density: 0.44
weight: 0.2
- model: Kukedlc/Neural4gsm8k
parameters:
density: 0.44
weight: 0.2
merge_method: dare_ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-4-Maths-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
leejaymin/etri-ones-solar | leejaymin | 2024-04-03T01:02:04Z | 723 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:instruction",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-31T19:57:59Z | ---
language:
- ko
datasets:
- instruction
library_name: transformers
pipeline_tag: text-generation
license: mit
---
# **etri-ones-solar**
## Model Details
**Model Developers**
- The model is fine-tuned on an open instruction dataset.
**Model Architecture**
- This model is an auto-regressive language model based on the SOLAR transformer architecture.
**Base Model**
- SOLAR: https://huggingface.co/upstage/SOLAR-10.7B-v1.0
**Training Dataset**
-
---
# Model comparisons1
> coming soon
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **[...your_model_name...]** | NaN | NaN | NaN | NaN | NaN | NaN |
---
# Model comparisons2
> AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness)
| Model | Copa | Copa | HellaSwag | HellaSwag | BoolQ | BoolQ | Sentineg | Sentineg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot |
| **[...your_model_name...]** | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "[...your_model_repo...]"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
|
MiniMoog/Mergerix-7b-v0.1 | MiniMoog | 2024-04-02T19:28:30Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSirKrishna-7b-DPO",
"Kukedlc/NeuralAlgo-7B-DPO",
"base_model:Kukedlc/NeuralSirKrishna-7b-DPO",
"base_model:Kukedlc/NeuralAlgo-7B-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T19:21:13Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSirKrishna-7b-DPO
- Kukedlc/NeuralAlgo-7B-DPO
base_model:
- Kukedlc/NeuralSirKrishna-7b-DPO
- Kukedlc/NeuralAlgo-7B-DPO
license: apache-2.0
---
# Mergerix-7b-v0.1
Mergerix-7b-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralSirKrishna-7b-DPO](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b-DPO)
* [Kukedlc/NeuralAlgo-7B-DPO](https://huggingface.co/Kukedlc/NeuralAlgo-7B-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/NeuralSirKrishna-7b-DPO
layer_range: [0, 32]
- model: Kukedlc/NeuralAlgo-7B-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralSirKrishna-7b-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MiniMoog/Mergerix-7b-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kukedlc/NeuralSynthesis-7B-v0.3 | Kukedlc | 2024-04-07T00:46:45Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T00:26:50Z | ---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
# NeuralSynthesis-7B-v0.3

NeuralSynthesis-7B-v0.3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MiniMoog/Mergerix-7b-v0.3](https://huggingface.co/MiniMoog/Mergerix-7b-v0.3)
* [automerger/Ognoexperiment27Multi_verse_model-7B](https://huggingface.co/automerger/Ognoexperiment27Multi_verse_model-7B)
* [nlpguy/T3QM7](https://huggingface.co/nlpguy/T3QM7)
* [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
## 🧩 Configuration
```yaml
models:
- model: MiniMoog/Mergerix-7b-v0.3
- model: automerger/Ognoexperiment27Multi_verse_model-7B
- model: nlpguy/T3QM7
- model: Kukedlc/NeuralSynthesis-7B-v0.1
- model: liminerity/M7-7b
- model: automerger/YamshadowExperiment28-7B
merge_method: model_stock
base_model: automerger/YamshadowExperiment28-7B
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralSynthesis-7B-v0.3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ShenaoZhang/0.001_idpo_noreplacerej_iter_1 | ShenaoZhang | 2024-04-07T20:40:22Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T16:57:31Z | ---
license: mit
base_model: HuggingFaceH4/mistral-7b-sft-beta
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: 0.001_idpo_noreplacerej_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_idpo_noreplacerej_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
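For reference, these values map roughly onto `transformers.TrainingArguments` as sketched below. This is an illustrative reconstruction only (the model was trained with the alignment-handbook/TRL DPO recipe, not this snippet); the total train batch size of 128 comes from 8 devices x 8 per device x 2 gradient-accumulation steps.
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (not the original script).
training_args = TrainingArguments(
    output_dir="0.001_idpo_noreplacerej_iter_1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 8 GPUs x 8 per device x 2 steps = 128 effective
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```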
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
bartowski/Llama-3-8B-Synthia-v3.5-GGUF | bartowski | 2024-05-17T17:04:28Z | 723 | 2 | null | [
"gguf",
"text-generation",
"license:llama3",
"region:us"
] | text-generation | 2024-05-17T16:47:37Z | ---
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-8B-Synthia-v3.5
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2901">b2901</a> for quantization.
Original model: https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-Synthia-v3.5-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-Synthia-v3.5-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-8B-Synthia-v3.5-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-Synthia-v3.5-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-Synthia-v3.5-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-Synthia-v3.5-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-Synthia-v3.5-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-Synthia-v3.5-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-8B-Synthia-v3.5-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-8B-Synthia-v3.5-GGUF --include "Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-8B-Synthia-v3.5-GGUF --include "Llama-3-8B-Synthia-v3.5-Q8_0.gguf/*" --local-dir Llama-3-8B-Synthia-v3.5-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Llama-3-8B-Synthia-v3.5-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
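To make the sizing rule above concrete, here is a small illustrative helper, not an official tool: the file sizes are copied from the table above and the 1.5GB headroom is an assumption.
```python
# Sketch: pick the largest quant that fits the available VRAM with some headroom.
QUANT_SIZES_GB = {  # from the table above
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q5_K_S": 5.59,
    "Q4_K_M": 4.92, "Q4_K_S": 4.69, "IQ4_XS": 4.44, "Q3_K_M": 4.01,
    "IQ3_M": 3.78, "Q2_K": 3.17, "IQ2_M": 2.94,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    if not fitting:
        return "IQ2_XS or smaller (or offload layers to CPU)"
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))   # an 8 GB card -> Q5_K_M
print(pick_quant(12.0))  # a 12 GB card -> Q8_0
```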
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
John6666/ebara-pony-v21-sdxl-fp32 | John6666 | 2024-06-07T12:19:01Z | 723 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-07T11:58:59Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://huggingface.co/tsukihara/xl_model).
|
mkpvishnu/miniLM-go_Emotions | mkpvishnu | 2024-06-13T16:57:24Z | 723 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment",
"en",
"dataset:google-research-datasets/go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-12T08:43:21Z | ---
license: mit
datasets:
- google-research-datasets/go_emotions
language:
- en
library_name: transformers
tags:
- sentiment
---
# Fine-Tuned MiniLM for GoEmotions Sentiment Analysis
This repository contains a fine-tuned version of Microsoft's MiniLM-v2 model, specifically optimized for sentiment analysis using the GoEmotions dataset. The model is capable of classifying text into the following emotional/sentiment categories:
This model is just **90MB**, making it ideal for memory-constrained environments.
* anger
* approval
* confusion
* disappointment
* disapproval
* gratitude
* joy
* sadness
* neutral
These categories cover most of the sentiments that can appear in a sentence, which also makes the model useful for validating other sentiment analysis models.
Label mapping when using inference:
```
{
"LABEL_0":anger,
"LABEL_1":approval,
"LABEL_2":confusion,
"LABEL_3":disappointment,
"LABEL_4":disapproval,
"LABEL_5":gratitude,
"LABEL_6":joy,
"LABEL_7":sadness,
"LABEL_8":neutral
}
```
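For example, a minimal sketch that applies this mapping on top of the `transformers` pipeline output (using this repository's model id; it assumes the saved config does not already define human-readable label names):
```python
from transformers import pipeline

id2sentiment = {
    "LABEL_0": "anger", "LABEL_1": "approval", "LABEL_2": "confusion",
    "LABEL_3": "disappointment", "LABEL_4": "disapproval", "LABEL_5": "gratitude",
    "LABEL_6": "joy", "LABEL_7": "sadness", "LABEL_8": "neutral",
}

classifier = pipeline("text-classification", model="mkpvishnu/miniLM-go_Emotions")
result = classifier("Thank you so much for the help!")[0]
print(id2sentiment.get(result["label"], result["label"]), round(result["score"], 3))
```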
## Why MiniLM?
MiniLM is a distilled version of larger language models like BERT and RoBERTa. It strikes a remarkable balance between performance and efficiency:
* **Reduced Size:** MiniLM is significantly smaller than its parent models, making it faster to load and deploy, especially in resource-constrained environments.
* **Comparable Performance:** Despite its compact size, MiniLM maintains surprisingly high accuracy on various natural language processing (NLP) tasks, including sentiment analysis.
* **Distillation Power:** MiniLM's distillation technique ensures that it captures the essential knowledge of larger models, making it a potent tool for real-world applications.
## GoEmotions Dataset
google-research-datasets/go_emotions
The GoEmotions dataset is a valuable resource for sentiment analysis. It consists of thousands of Reddit comments labeled with the nine emotional/sentiment classes listed above. This dataset's richness in diverse expressions of emotions makes it an ideal choice for training a versatile sentiment analysis model.
## Training Procedure
1. **Data Preprocessing:** The GoEmotions dataset was preprocessed to ensure consistency and remove noise.
2. **Tokenizer:** The MiniLM-v2 tokenizer was used to convert text into numerical representations suitable for the model.
3. **Fine-Tuning:** The MiniLM-v2 model was fine-tuned on the GoEmotions dataset using a standard training loop. The model's parameters were adjusted to optimize its performance on sentiment classification.
4. **Evaluation:** The fine-tuned model was evaluated on a held-out test set to measure its accuracy and generalization capabilities.
## How to Use This Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
required_sentiments = ['anger', 'approval', 'confusion', 'disappointment', 'disapproval', 'gratitude', 'joy', 'sadness', 'neutral']
model = AutoModelForSequenceClassification.from_pretrained('mkpvishnu/miniLM-go_Emotions')  # or a local './saved_model' directory
tokenizer = AutoTokenizer.from_pretrained('mkpvishnu/miniLM-go_Emotions')
text = "How can you be so careless"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding='max_length', max_length=128)
model.eval()
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1).item()
# Map the label to sentiment
label_mapping = {idx: sentiment for idx, sentiment in enumerate(required_sentiments)}
predicted_sentiment = label_mapping[predictions]
print(f'Text: {text}')
print(f'Predicted Sentiment: {predicted_sentiment}')
``` |
ugurcelebi/DevOpsGPT-1.1-q8_0 | ugurcelebi | 2024-06-22T14:43:20Z | 723 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T14:39:36Z | ---
base_model: unsloth/Qwen2-7B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Uploaded model
- **Developed by:** ugurcelebi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
reasonwang/EleutherAI-gpt-neo-1.3B-alpaca | reasonwang | 2023-08-07T21:13:27Z | 722 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-07T21:03:23Z | Entry not found |
SaiRaj03/Text_To_Image | SaiRaj03 | 2023-10-25T17:33:58Z | 722 | 12 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-10-20T18:03:10Z | ---
tags:
- text-to-image
pinned: true
pipeline_tag: text-to-image
---
# Text To Image
|
nisten/quad-mixtrals-gguf | nisten | 2023-12-27T19:14:16Z | 722 | 32 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2023-12-22T23:36:12Z | ---
license: apache-2.0
---
**Experimental quants of 4 expert MoE mixtrals in various GGUF formats.**
Original model used for custom quants: ***NeverSleep/Mistral-11B-SynthIAirOmniMix***
https://huggingface.co/NeverSleep/Mistral-11B-SynthIAirOmniMix
**Goal is to have the best performing MoE < 10gb**
Experimental q8 and q4 files for training/finetuning too.
***No sparsity tricks yet.***
The 8.4GB custom 2-bit quant works OK up until a 512-token context length, then starts looping.
- Install llama.cpp from github and run it:
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j
wget https://huggingface.co/nisten/quad-mixtrals-gguf/resolve/main/4mixq2.gguf
./server -m 4mixq2.gguf --host "my.internal.ip.or.my.cloud.host.name.goes.here.com" -c 512
```
limit output to 500 tokens |
nlpguy/Lelantos-low-tune | nlpguy | 2024-03-04T13:47:42Z | 722 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:openaccess-ai-collective/DPOpenHermes-7B-v2",
"base_model:SanjiWatsuki/Lelantos-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-06T17:24:45Z | ---
license: apache-2.0
tags:
- mergekit
- merge
base_model:
- openaccess-ai-collective/DPOpenHermes-7B-v2
- SanjiWatsuki/Lelantos-7B
model-index:
- name: Lelantos-low-tune
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.33
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Lelantos-low-tune
name: Open LLM Leaderboard
---
# mergedtwo
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) as a base.
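In essence (a simplified, illustrative sketch of the idea rather than the mergekit implementation), task arithmetic adds the weighted parameter differences ('task vectors') of the merged models back onto the base:
```python
import torch

def task_arithmetic(base, models, weights):
    """Toy task-arithmetic merge of a single parameter tensor (illustrative only)."""
    merged = base.clone()
    for m, w in zip(models, weights):
        merged = merged + w * (m - base)   # add the weighted task vector
    return merged

# Example with stand-in tensors and the 0.5 / 0.5 weights from the config below
base = torch.randn(8, 8)
m1 = base + 0.2 * torch.randn(8, 8)
m2 = base + 0.2 * torch.randn(8, 8)
merged = task_arithmetic(base, [m1, m2], [0.5, 0.5])
```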
### Models Merged
The following models were included in the merge:
* merged
* [SanjiWatsuki/Lelantos-7B](https://huggingface.co/SanjiWatsuki/Lelantos-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: openaccess-ai-collective/DPOpenHermes-7B-v2
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: openaccess-ai-collective/DPOpenHermes-7B-v2
- layer_range: [0, 32]
model: merged
parameters:
weight: 0.5
- layer_range: [0, 32]
model: SanjiWatsuki/Lelantos-7B
parameters:
weight: 0.5
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Lelantos-low-tune)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.82|
|AI2 Reasoning Challenge (25-Shot)|67.06|
|HellaSwag (10-Shot) |86.06|
|MMLU (5-Shot) |64.11|
|TruthfulQA (0-shot) |61.33|
|Winogrande (5-shot) |79.56|
|GSM8k (5-shot) |66.79|
|
CultriX/Wernicke-7B-dpo | CultriX | 2024-01-31T22:41:52Z | 722 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"CultriX/Wernicke-7B-v8",
"vanillaOVO/supermario_v2",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:CultriX/Wernicke-7B-v8",
"base_model:vanillaOVO/supermario_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T19:12:33Z | ---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
license: apache-2.0
---
# Edit:
* DPO fine-tune of CultriX/Wernicke-7B-v9 using the truthy-dpo dataset.
# Wernicke-7B-v9
Wernicke-7B-v9 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/Wernicke-7B-v8](https://huggingface.co/CultriX/Wernicke-7B-v8)
* [vanillaOVO/supermario_v2](https://huggingface.co/vanillaOVO/supermario_v2)
## 🧩 Configuration
```yaml
models:
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.50
weight: 0.35
- model: CultriX/Wernicke-7B-v8
parameters:
density: 0.50
weight: 0.35
- model: vanillaOVO/supermario_v2
parameters:
density: 0.50
weight: 0.30
merge_method: dare_ties
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
int8_mask: true
dtype: float16
```
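For intuition, `dare_ties` first applies DARE to each weighted task vector: it randomly drops a fraction `1 - density` of the entries and rescales the survivors by `1 / density` so the expected update is unchanged, then resolves sign conflicts TIES-style before adding the result onto the base. A rough sketch of just the DARE step under the `density: 0.50` setting above (illustrative, not mergekit's code):

```python
import torch

def dare_sparsify(task_vector: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    """Randomly keep a `density` fraction of entries, rescaled so the expectation is unchanged."""
    keep_mask = torch.bernoulli(torch.full_like(task_vector.float(), density))
    return task_vector.float() * keep_mask / density
```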
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/Wernicke-7B-v9"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kquant03/Azathoth-16x7B-bf16 | Kquant03 | 2024-02-29T02:13:22Z | 722 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"conversational",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-03T22:56:50Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# WARNING:r0Ot:ALL lay3rs have degenerate routing parameters - your prompts m@y be too similar - [mergekit_moe_config.yml](https://huggingface.co/Kquant03/Azathoth-16x7B-bf16/blob/main/mergekit_moe_config.yml)
[Join our Discord!](https://discord.gg/uT4CzytfYW)
After testing with these models, I would like to see if there is a limit to prompting with mergekit MoE. This will be the first of those tests.
The config looks like this...(detailed version is in the files and versions):
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - base
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #1
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #2
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #4
- [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #5
- [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b) - expert #6
- [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #7
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #8
- [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) - expert #9
- [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b) - expert #10
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #11
- [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B) - expert #12
- [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #13
- [FelixChao/Severus-7B](https://huggingface.co/FelixChao/Severus-7B) - expert #14
- [ConvexAI/Metabird-7B](https://huggingface.co/ConvexAI/Metabird-7B) - expert #15
- [alnrg2arg/test3_sft_16bit](https://huggingface.co/alnrg2arg/test3_sft_16bit) - expert #16
# Uploading it to the leaderboard. It will not convert to gguf.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
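To make the routing described above concrete, here is a minimal top-2 gate sketch in PyTorch (illustrative shapes and names only, not Mixtral's exact implementation): each token receives softmax scores over the experts, its two highest-scoring experts process it, and their outputs are summed weighted by the renormalized gate scores. The auxiliary load-balancing loss mentioned above would be computed from these same `scores` and the per-expert token counts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, ffn_size: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)  # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size), nn.SiLU(), nn.Linear(ffn_size, hidden_size)
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, hidden_size)
        scores = F.softmax(self.gate(x), dim=-1)          # (num_tokens, num_experts)
        top2_scores, top2_idx = scores.topk(2, dim=-1)    # two experts per token
        top2_scores = top2_scores / top2_scores.sum(dim=-1, keepdim=True)  # renormalize
        out = torch.zeros_like(x)
        for slot in range(2):                             # combine both expert outputs additively
            for e, expert in enumerate(self.experts):
                mask = top2_idx[:, slot] == e
                if mask.any():
                    out[mask] += top2_scores[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```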
## "Wait...but you called this a frankenMoE?"
The difference between a MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained simultaneously with the experts.
norallm/normistral-7b-warm | norallm | 2024-06-21T18:48:02Z | 722 | 26 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"gpt",
"generative",
"no",
"nb",
"nn",
"dataset:uonlp/CulturaX",
"dataset:NbAiLab/NCC",
"dataset:vikp/starcoder_filtered",
"arxiv:2204.02311",
"arxiv:2005.14165",
"arxiv:2302.01398",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T23:10:48Z | ---
language:
- 'no'
- nb
- nn
inference: true
tags:
- mistral
- gpt
- generative
license: apache-2.0
pipeline_tag: text-generation
datasets:
- uonlp/CulturaX
- NbAiLab/NCC
- vikp/starcoder_filtered
---
# **NorMistral-7b-warm**
<img align="center" src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%>
NorMistral-7b-warm is a large Norwegian language model initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and
continuously pretrained on a total of 260 billion subword tokens (using six repetitions of open Norwegian texts).
This model is a part of the NORA.LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project](https://hplt-project.org/), [the National Library of Norway](https://huggingface.co/NbAiLab), and [the University of Turku](https://huggingface.co/TurkuNLP).
All the models are pre-trained on the same dataset and with the same tokenizer.
NorMistral-7b-warm has over 7 billion parameters and is based on [the Mistral architecture](https://huggingface.co/mistralai/Mistral-7B-v0.1).
The NORA.LLM language model family includes (as of now):
- [**NorMistral-7b-warm**](https://huggingface.co/norallm/normistral-7b-warm) -- an LLM initialized from [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and continuously pretrained on Norwegian data;
- [**NorMistral-7b-scratch**](https://huggingface.co/norallm/normistral-7b-scratch) -- a Mistral-based LLM pretrained from scratch on Norwegian data;
- [**NorBLOOM-7b-scratch**](https://huggingface.co/norallm/NorBLOOM-7b-scratch) -- a BLOOM-based LLM pretrained from scratch on Norwegian data.
*Disclaimer: This model is pretrained on raw (mostly web-based) textual data.
It is not finetuned to follow instructions, and it can generate harmful completions after inappropriate user prompts.
It is primarily intended for research purposes.*
_____
## Pretraining corpus
The model is continually pretrained exclusively on publicly available data. We combine the resources from [the public part of the NCC corpus](https://huggingface.co/datasets/NbAiLab/NCC), from [the cleaned HPLT corpus](https://hplt-project.org/datasets/v1.2), and from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
This resulted in over 34B subword tokens of Norwegian (Bokmål or Nynorsk) in total, which amounts to about 26.7B whitespace-separated tokens.
We also augment the corpus with [Starcoder](https://huggingface.co/datasets/vikp/starcoder_filtered); 20% of the 260B tokens are sampled from this code corpus.
The natural language data is repeated six times to get the pretraining budget of 260B tokens, in accordance with findings from [Muennighoff et al. (2023)](https://neurips.cc/virtual/2023/poster/70706).
_____
## Model details
**Model Developers:** Language Technology Group at the University of Oslo.
**Variations:** NorMistral is currently published as two 7B variants: one trained entirely from *scratch* and one *warm*-started from the Mistral model.
**Input:** Textual input.
**Output:** Generated text.
**Model Architecture:** NorMistral is an auto-regressive language model that uses an optimized transformer architecture based on the Mistral/Llama language models.
||Training Data|Params|Context Length|Tokens|LR|
|---|---|---|---|---|---|
|NorMistral-7b-warm|NCC+HPLT+CulturaX+Starcoder|7B|2k|260B|1.0 x 10<sup>-4</sup>|
|NorMistral-7b-scratch|NCC+HPLT+CulturaX+Starcoder|7B|2k|260B|3.0 x 10<sup>-4</sup>|
|NorBLOOM-7b-scratch|NCC+HPLT+CulturaX+Starcoder|7B|2k|260B|1.2 x 10<sup>-4</sup>|
**Tokenizer:** Byte-based BPE tokenizer trained on the same Norwegian corpus as this model. The vocabulary size is 32,768 tokens.
**Training FLOPs** The approximate amount is 1.22e+22 FLOPs; calculated as in [Chowdhery et al. (2022)](https://arxiv.org/abs/2204.02311).
**Model Dates:** The models were pretrained between December 2023 and January 2024.
**Status:** These are only pretrained language models; instruction-finetuned models will follow soon.
**License:** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
**Research Paper:** Forthcoming
_____
## Initial evaluation
*Disclaimer: our model evaluation is an ongoing phase and is not claimed to be exhaustive. We provide our initial evaluation results on standard natural language understanding and generation tasks, and our evaluation design will be extended.
The user should perform evaluation for their particular model application scenario, including safety and bias evaluations.*
The perplexity on the heldout [validation set from the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC) is 7.43 and the final training perplexity is 4.76.
Our initial downstream evaluation is conducted on reading comprehension, sentiment analysis and machine translation tasks using open-source peer-reviewed datasets and benchmarks in native Norwegian.
We release [our codebase here](https://github.com/ltgoslo/norallm). We compare against other pretrained generative language models that officially support Norwegian: [NB-GPT-J](https://huggingface.co/NbAiLab/nb-gpt-j-6B), [GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b), [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2), and [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b); we also include evaluation of [Mistral-7b-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
### Sentiment analysis
[NoReC](https://huggingface.co/datasets/ltg/norec_sentence) ([Øvrelid et al., 2020](https://aclanthology.org/2020.lrec-1.618/)) is a dataset for sentence-level sentiment analysis derived from the Norwegian Review Corpus [(Velldal et al., 2018)](https://aclanthology.org/L18-1661/).
We use the binary formulation of this task (positive vs. negative).
<details>
<summary>Method (click to expand)</summary>
* Evaluation setting: zero-shot and few-shot perplexity-based evaluation (see the sketch after this list).
* Prompt: ```"Tekst: {text}\nSentiment:{label}"```, where the ```label``` is either "positiv" or "negativ".
* Few-shot results show the average scores across 5 repetitions
* Evaluation script: https://github.com/ltgoslo/norallm/blob/main/initial_evaluation/sentiment_analysis.py
* Performance metric: macro-averaged F1-score.
</details>
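As a concrete reading of "perplexity-based evaluation": fill the prompt template once per candidate label, score each filled prompt with the language model, and predict the label with the lower loss. A minimal zero-shot sketch (the exact scoring in the linked evaluation script may differ, e.g. by scoring only the label tokens):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-7b-warm")
model = AutoModelForCausalLM.from_pretrained("norallm/normistral-7b-warm").eval()

@torch.no_grad()
def predict_sentiment(text: str) -> str:
    losses = {}
    for label in ("positiv", "negativ"):
        prompt = f"Tekst: {text}\nSentiment:{label}"
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        losses[label] = model(ids, labels=ids).loss.item()  # mean negative log-likelihood
    return min(losses, key=losses.get)  # lower loss = more likely label
```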
<details open>
<summary>Macro-averaged F1-scores on the sentence-level sentiment analysis task (NoReC)</summary>
|Model|0-shot (macro F1)|1-shot (macro F1)|16-shot (macro F1)|
|---|---|---|---|
|NorMistral-7b-warm|60.6|**77.8**|**87.3**|
|NorMistral-7b-scratch|47.3|62.2|80.1|
|NorBLOOM-7b|**75.7**|73.8|65.5|
|NB-GPT-J|48.4|56.5|65.2|
|GPT-Sw3-6.7B|61.5|72.2|76.5|
|GPT-Sw3-6.7B-v2|42.4|69.1|83.4|
|Falcon-7B|53.3|61.6|74.9|
|Mistral-7B-v0.1|70.2|72.9|84.8|
</details>
### Reading comprehension
[NorQuAD](https://huggingface.co/datasets/ltg/norquad) ([Ivanova et al., 2023](https://aclanthology.org/2023.nodalida-1.17/)) is a dataset for extractive question answering in Norwegian designed similarly to [SQuAD (Rajpurkar et al., 2016)](https://aclanthology.org/D16-1264/).
<details>
<summary>Method (click to expand)</summary>
* Evaluation setting: zero-shot and few-shot settings via natural language generation using the greedy decoding strategy.
* Prompt: ```"Tittel: {title}\n\nTekst: {text}\n\nSpørsmål: {question}\n\nSvar:{answer}"``` Based on [Brown et al. (2020)](https://arxiv.org/abs/2005.14165).
* Few-shot results show the average scores across 5 repetitions
* Evaluation script: https://github.com/ltgoslo/norallm/blob/main/initial_evaluation/norquad.py
* Performance metrics: macro-averaged F1-score and exact match (EM).
</details>
<details open>
<summary>Performance results on the extractive question answering task (NorQuAD)</summary>
|Model|0-shot (F1/EM)|1-shot (F1/EM)|2-shot (F1/EM)|
|---|---|---|---|
|NorMistral-7b-warm|**48.6**/**24.8**|63.6/40.0|66.5/43.8|
|NorMistral-7b-scratch|34.0/15.7|46.5/25.8|48.5/27.8|
|NorBLOOM-7b|35.0/13.3|47.7/28.0|49.3/30.1|
|NB-GPT-J|24.4/6.8|32.8/11.6|35.0/12.3|
|GPT-Sw3-6.7B|46.5/22.0|55.9/32.0|58.1/34.3|
|GPT-Sw3-6.7B-v2|46.9/22.5|61.1/38.9|66.0/44.5|
|Falcon-7B|15.8/7.0|27.3/13.9|27.4/13.1|
|Mistral-7B-v0.1|46.4/22.4|**64.9**/**41.1**|**71.7**/**49.4**|
</details>
### Grammatical error correction
[ASK-RAW](https://huggingface.co/datasets/ltg/ask-gec) is a dataset for Norwegian grammatical error correction (GEC) created by [Matias Jentoft (2023)](https://www.duo.uio.no/handle/10852/103885).
<details>
<summary>Method (click to expand)</summary>
* Evaluation setting: zero-shot and few-shot settings via natural language generation using the greedy decoding strategy.
* Prompt: ```"Her er eksempler på perfekt korrigering av grammatiske feil:\n\nTekst: {source_text}\nKorreksjon:{target_text}"```
* Few-shot results show the average scores across 5 repetitions
* Evaluation script: https://github.com/ltgoslo/norallm/blob/main/initial_evaluation/gec.py
* Performance metrics: the evaluation metric uses [ERRANT](https://github.com/chrisjbryant/errant/tree/main), which identifies edit-spans and then calculates the F_{0.5} scores between the gold edits and predicted edits.
</details>
<details open>
<summary>Results on [the ASK corpus](https://huggingface.co/datasets/ltg/ask-gec) (ERRANT F_{0.5})</summary>
|Model|0-shot (F0.5)|1-shot (F0.5)|32-shot (F0.5)|
|---|---|---|---|
|NorMistral-7b-warm|**40.8**|41.8|48.5|
|NorMistral-7b-scratch|22.1|28.8|42.1|
|NorBLOOM-7b|8.7|24.5|32.0|
|NB-GPT-J|9.1|28.2|30.6|
|GPT-Sw3-6.7B|30.5|42.9|**50.6**|
|GPT-Sw3-6.7B-v2|40.6|**43.4**|49.8|
|Falcon-7B|10.8|12.4|15.5|
|Mistral-7B-v0.1|26.0|27.4|30.6|
</details>
### Machine translation
[Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) [(Tiedemann, 2020)](https://aclanthology.org/2020.wmt-1.139/) is a benchmark for machine translation, which includes hundreds of language pairs. We consider six language pairs (English <-> Bokmål, English <-> Nynorsk, and Bokmål <-> Nynorsk).
<details>
<summary>Method (click to expand)</summary>
* Evaluation setting: zero-shot and few-shot settings via natural language generation using the greedy decoding strategy.
* Prompt: ```"{source_language}: {source_text}\n{target_language}:{target_text}"```, where the ```source_language``` and ```target_language``` are ```Engelsk```, ```Bokmål```, or ```Nynorsk```. Based on [Garcia et al. (2023)](https://arxiv.org/abs/2302.01398).
* Few-shot results show the average scores across 5 repetitions
* Evaluation script: https://github.com/ltgoslo/norallm/blob/main/initial_evaluation/machine_translation.py
* Performance metrics: BLEU ([Papineni et al., 2002](https://aclanthology.org/P02-1040/)) and chrF++ ([Popović, 2015](https://aclanthology.org/W15-3049/)).
</details>
<details open>
<summary>English → Norwegian Bokmål</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**55.8**/**70.7**|**56.7**/**71.5**|57.7/72.4|
|NorMistral-7b-scratch|46.4/62.9|50.4/66.3|52.1/67.6|
|NorBLOOM-7b|37.1/53.6|50.1/65.8|52.0/67.6|
|NB-GPT-J|8.6/39.1|35.9/64.5|47.2/68.7|
|GPT-Sw3-6.7B|21.8/55.2|54.5/69.6|**58.6**/**73.2**|
|GPT-Sw3-6.7B-v2|20.6/53.2|51.2/66.6|58.4/73.0|
|Falcon-7B|19.1/40.1|20.6/41.8|22.1/43.6|
|Mistral-7B-v0.1|32.5/51.9|35.4/55.1|36.3/56.0|
</details>
<details open>
<summary>English → Norwegian Nynorsk</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**43.6**/**62.0**|**44.2**/**63.2**|44.3/**63.7**|
|NorMistral-7b-scratch|38.0/56.9|39.2/57.9|40.7/59.3|
|NorBLOOM-7b|35.6/54.7|36.6/56.3|38.1/57.4|
|NB-GPT-J|1.7/14.7|6.3/34.1|35.2/60.4|
|GPT-Sw3-6.7B|13.4/44.3|43.6/62.5|**44.5**/63.5|
|GPT-Sw3-6.7B-v2|14.8/45.5|43.7/62.3|44.0/63.6|
|Falcon-7B|6.4/28.6|8.3/30.5|9.3/32.1|
|Mistral-7B-v0.1|11.6/35.7|13.5/38.7|15.0/40.0|
</details>
<details open>
<summary>Norwegian Bokmål → English</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**56.7**/**70.6**|**57.7**/**71.7**|**58.5**/**72.2**|
|NorMistral-7b-scratch|48.1/62.9|51.5/66.6|52.6/67.6|
|NorBLOOM-7b|46.0/61.5|51.3/66.7|51.7/66.9|
|NB-GPT-J|23.9/55.3|32.3/63.1|48.5/68.7|
|GPT-Sw3-6.7B|47.9/67.8|52.4/70.6|50.0/70.7|
|GPT-Sw3-6.7B-v2|38.8/59.6|49.0/68.6|50.7/70.6|
|Falcon-7B|42.4/58.5|47.3/62.3|48.6/63.3|
|Mistral-7B-v0.1|53.8/68.2|54.6/69.0|56.9/70.7|
</details>
<details open>
<summary>Norwegian Nynorsk → English</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**55.1**/**68.4**|**55.5**/**69.5**|56.0/69.8|
|NorMistral-7b-scratch|47.1/61.9|49.4/64.2|52.3/66.2|
|NorBLOOM-7b|45.0/59.3|48.3/64.0|49.0/64.7|
|NB-GPT-J|2.9/19.5|10.1/41.0|44.4/66.9|
|GPT-Sw3-6.7B|47.8/66.2|49.1/68.1|49.6/69.4|
|GPT-Sw3-6.7B-v2|46.3/67.5|48.9/69.3|**58.2**/**72.8**|
|Falcon-7B|21.6/40.6|31.7/47.4|36.6/57.1|
|Mistral-7B-v0.1|40.7/57.1|46.2/60.7|49.9/63.8|
</details>
<details open>
<summary>Norwegian Bokmål → Norwegian Nynorsk</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**75.8**/**87.5**|74.0/**86.9**|75.3/87.5|
|NorMistral-7b-scratch|38.0/56.9|39.2/57.9|40.7/59.3|
|NorBLOOM-7b|71.5/84.4|70.1/84.1|71.9/85.1|
|NB-GPT-J|6.6/35.5|9.6/41.0|26.0/64.7|
|GPT-Sw3-6.7B|63.6/82.8|74.7/86.0|75.8/86.9|
|GPT-Sw3-6.7B-v2|57.5/81.1|**75.3**/86.7|**76.7**/**87.6**|
|Falcon-7B|28.7/59.2|29.8/60.8|32.1/62.3|
|Mistral-7B-v0.1|32.0/62.2|32.9/62.6|35.2/63.9|
</details>
<details open>
<summary>Norwegian Nynorsk → Norwegian Bokmål</summary>
|Model|0-shot (BLEU/chrF++)|1-shot (BLEU/chrF++)|5-shot (BLEU/chrF++)|
|---|---|---|---|
|NorMistral-7b-warm|**88.1**/**93.6**|**89.2**/**94.3**|**89.3**/**94.6**|
|NorMistral-7b-scratch|85.1/91.4|86.6/92.4|87.4/93.0|
|NorBLOOM-7b|78.7/88.5|84.2/90.7|87.4/93.0|
|NB-GPT-J|2.7/18.5|6.9/35.6|52.9/84.3|
|GPT-Sw3-6.7B|652.3/82.4|86.1/92.5|87.8/93.6|
|GPT-Sw3-6.7B-v2|72.0/88.6|86.1/92.5|88.2/93.9|
|Falcon-7B|36.7/61.6|38.3/63.5|45.8/68.1|
|Mistral-7B-v0.1|57.0/74.8|59.9/77.5|62.6/79.1|
</details>
_____
## Hardware and Software
**Training Factors:** The models were pretrained using the Megatron-DeepSpeed library on [the LUMI cluster in Finland](https://lumi-supercomputer.eu/).
**Carbon Footprint:** Pretraining one model took approximately 70k GPU hours of computation on AMD MI250X GPUs (assuming 2 GPUs per AMD MI250X device), each of which draws 500W.
LUMI is [one of the most eco-efficient data centers in the world](https://www.lumi-supercomputer.eu/sustainable-future/), and its energy consumption is covered 100% with renewable electricity.
_____
## Example usage
Let's try to use this model for English-to-Norwegian machine translation using simple zero-shot prompting:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# First, we will have to import the tokenizer and the language model
tokenizer = AutoTokenizer.from_pretrained("norallm/normistral-7b-warm")
model = AutoModelForCausalLM.from_pretrained("norallm/normistral-7b-warm").cuda().eval()
# Now we will define the zero-shot prompt template
prompt = """Engelsk: {0}
Bokmål:"""
# A function that will take care of generating the output
@torch.no_grad()
def generate(text):
text = prompt.format(text)
input_ids = tokenizer(text, return_tensors='pt').input_ids.cuda()
prediction = model.generate(
input_ids,
max_new_tokens=64,
do_sample=False,
eos_token_id=tokenizer('\n').input_ids
)
return tokenizer.decode(prediction[0, input_ids.size(1):]).strip()
# Now you can simply call the generate function with an English text you want to translate:
generate("I'm super excited about this Norwegian NORA model! Can it translate these sentences?")
# > this should output: 'Jeg er super spent på denne norske NORA modellen! Kan den oversette disse setningene?'
```
## Example usage on a GPU with ~16GB VRAM (try for yourself [in Google Colab](https://colab.research.google.com/drive/1AQgJ8lN-SNOqkUKj4xpQI5rr0R7V2Xzy?usp=sharing))
Install bitsandbytes if you want to load in 8bit
```bash
pip install bitsandbytes
pip install accelerate
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"norallm/normistral-7b-warm"
)
# This setup needs about 8gb VRAM
# Setting `load_in_8bit=False` -> 15gb VRAM
# Using `torch.float32` and `load_in_8bit=False` -> 21gb VRAM
model = AutoModelForCausalLM.from_pretrained(
"norallm/normistral-7b-warm",
device_map='auto',
load_in_8bit=True,
torch_dtype=torch.bfloat16
)
```
_____
## Quantization
### Provided files
| Name | Quant method | Bits Per Weight | Size | Max RAM/VRAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [normistral-7b-warm-Q3_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-Q3_K_M.gguf) | Q3_K_M | 3.89 | 3.28 GB| 5.37 GB | very small, high loss of quality |
| [normistral-7b-warm-Q4_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-Q4_K_M.gguf) | Q4_K_M | 4.83 | 4.07 GB| 6.16 GB | medium, balanced quality |
| [normistral-7b-warm-Q5_K_M.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-Q5_K_M.gguf) | Q5_K_M | 5.67 | 4.78 GB| 6.87 GB | large, very low quality loss |
| [normistral-7b-warm-Q6_K.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-Q6_K.gguf) | Q6_K | 6.56 | 5.54 GB| 7.63 GB | very large, extremely low quality loss |
| [normistral-7b-warm-Q8_0.gguf](https://huggingface.co/norallm/normistral-7b-warm/blob/main/normistral-7b-warm-Q8_0.gguf) | Q8_0 | 8.50 | 7.17 GB| 9.26 GB | very large, extremely low quality loss |
### How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for example.
#### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Directly from huggingface-hub (requires huggingface-hub to be installed)
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama.from_pretrained(
repo_id="norallm/normistral-7b-warm", # HuggingFace repository containing the GGUF files.
filename="*Q4_K_M.gguf", # suffix of the filename containing the level of quantization.
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Engelsk: Hello everyone! I'm a language model, how are you doing today?\nBokmål:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token
echo=True, # Whether to echo the prompt
temperature=0.3 # Temperature to set, for Q3_K_M, Q4_K_M, Q5_K_M, and Q6_0 it is recommended to set it relatively low.
)
``` |
yentinglin/Taiwan-LLM-8x7B-DPO | yentinglin | 2024-02-08T07:06:23Z | 722 | 18 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"zh",
"arxiv:2311.17487",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-07T11:32:49Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge the license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
---

# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 8x7B-DPO
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
## Model description
- **Model type:** An 8x7B-parameter Mixtral MoE model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-MoE-alpha](https://huggingface.co/yentinglin/Taiwan-LLM-MoE-alpha)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance
Checkout leaderboard in [Tw Chatbot Arena](https://arena.twllm.com/)
TMMLU+ score:
- yentinglin/Taiwan-LLM-MoE-alpha: 43.93
- yentinglin/Taiwan-LLM-8x7B-DPO: TBD
## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-8x7B-DPO", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Citation
If you find Taiwan LLM useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Acknowledgement
Ubitus provides valuable compute resources for the project. |
sonthenguyen/OpenHermes-2.5-Mistral-7B-mt-bench-DPO-reversed_corrupted | sonthenguyen | 2024-02-15T21:28:00Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T06:01:10Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/toten_gsm8k_merged_s | fzzhang | 2024-02-17T05:06:29Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:gsm8k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T04:57:42Z | ---
library_name: transformers
license: apache-2.0
datasets:
- gsm8k
language:
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cjpais/llava-v1.6-vicuna-7b-gguf | cjpais | 2024-03-07T00:39:24Z | 722 | 3 | null | [
"gguf",
"llava",
"image-text-to-text",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2024-02-17T16:43:00Z | ---
license: apache-2.0
tags:
- llava
pipeline_tag: image-text-to-text
---
# GGUF Quantized LLaVA 1.6 Vicuna 7B
Updated quants and projector from [PR #5267](https://github.com/ggerganov/llama.cpp/pull/5267)
## Provided files
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [llava-v1.6-vicuna-7b.Q3_K_XS.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q3_K_XS.gguf) | Q3_K_XS | 3 | 2.77 GB| very small, high quality loss |
| [llava-v1.6-vicuna-7b.Q3_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.3 GB| very small, high quality loss |
| [llava-v1.6-vicuna-7b.Q4_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended |
| [llava-v1.6-vicuna-7b.Q5_K_S.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended |
| [llava-v1.6-vicuna-7b.Q5_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended |
| [llava-v1.6-vicuna-7b.Q6_K.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss |
| [llava-v1.6-vicuna-7b.Q8_0.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-7b-gguf/blob/main/llava-v1.6-vicuna-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended |
<br>
<br>
# ORIGINAL LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
**Model date:**
LLaVA-v1.6-Vicuna-7B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs. |
Yuma42/KangalKhan-ShinyEmerald-7B | Yuma42 | 2024-03-05T10:54:59Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-Sapphire-7B",
"Yuma42/KangalKhan-Ruby-7B-Fixed",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-Sapphire-7B",
"base_model:Yuma42/KangalKhan-Ruby-7B-Fixed",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T17:15:07Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-Sapphire-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
base_model:
- Yuma42/KangalKhan-Sapphire-7B
- Yuma42/KangalKhan-Ruby-7B-Fixed
model-index:
- name: KangalKhan-ShinyEmerald-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.65
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShinyEmerald-7B
name: Open LLM Leaderboard
---
# KangalKhan-ShinyEmerald-7B
KangalKhan-ShinyEmerald-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-Sapphire-7B](https://huggingface.co/Yuma42/KangalKhan-Sapphire-7B)
* [Yuma42/KangalKhan-Ruby-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed)
## 🧩 Configuration
```yaml
models:
- model: teknium/OpenHermes-2.5-Mistral-7B
# no parameters necessary for base model
- model: Yuma42/KangalKhan-Sapphire-7B
parameters:
density: 0.6
weight: 0.5
- model: Yuma42/KangalKhan-Ruby-7B-Fixed
parameters:
density: 0.6
weight: 0.5
merge_method: ties
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
normalize: true
dtype: bfloat16
```
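For intuition, `ties` merging works in three steps on the task vectors (fine-tuned minus base weights): trim each vector to its largest-magnitude `density` fraction, elect a per-parameter sign from the weighted sum, and average only the values whose sign agrees. A rough per-tensor sketch (illustrative, not mergekit's implementation); the result is then added back onto the base model:

```python
import torch

def trim(tv: torch.Tensor, density: float) -> torch.Tensor:
    """Zero out everything except the top `density` fraction of entries by magnitude."""
    k = max(1, int(density * tv.numel()))
    threshold = tv.abs().flatten().kthvalue(tv.numel() - k + 1).values
    return torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv))

def ties_merge(task_vectors, weights, density=0.6):
    trimmed = torch.stack([w * trim(tv.float(), density) for tv, w in zip(task_vectors, weights)])
    elected_sign = torch.sign(trimmed.sum(dim=0))                 # per-parameter sign consensus
    agree = (torch.sign(trimmed) == elected_sign) & (trimmed != 0)
    return (trimmed * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)  # disjoint mean
```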
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-ShinyEmerald-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-ShinyEmerald-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.63|
|AI2 Reasoning Challenge (25-Shot)|66.21|
|HellaSwag (10-Shot) |85.37|
|MMLU (5-Shot) |63.36|
|TruthfulQA (0-shot) |56.65|
|Winogrande (5-shot) |78.37|
|GSM8k (5-shot) |61.79|
|
InnerI/InnerI-AI-sn6-7B-slerp | InnerI | 2024-03-09T00:49:53Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"tomaszki/nous-thirty",
"InnerI/A-I-0xtom-7B-slerp",
"conversational",
"base_model:tomaszki/nous-thirty",
"base_model:InnerI/A-I-0xtom-7B-slerp",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T18:19:49Z | ---
tags:
- merge
- mergekit
- lazymergekit
- tomaszki/nous-thirty
- InnerI/A-I-0xtom-7B-slerp
base_model:
- tomaszki/nous-thirty
- InnerI/A-I-0xtom-7B-slerp
license: llama2
---
# InnerI-AI-sn6-7B-slerp
InnerI-AI-sn6-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [tomaszki/nous-thirty](https://huggingface.co/tomaszki/nous-thirty)
* [InnerI/A-I-0xtom-7B-slerp](https://huggingface.co/InnerI/A-I-0xtom-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: tomaszki/nous-thirty
layer_range: [0, 32]
- model: InnerI/A-I-0xtom-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: tomaszki/nous-thirty
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
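SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, with `t` controlling how far to move from the base model toward the other model (the `filter` blocks above simply assign different `t` schedules to attention and MLP tensors). A minimal per-tensor sketch (illustrative only, not mergekit's implementation), falling back to plain linear interpolation when the tensors are nearly parallel:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    cos_theta = torch.clamp(
        (a_flat / (a_flat.norm() + eps)) @ (b_flat / (b_flat.norm() + eps)), -1.0, 1.0
    )
    theta = torch.acos(cos_theta)
    if theta.abs() < 1e-4:                               # nearly parallel: plain lerp
        merged = (1 - t) * a_flat + t * b_flat
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1 - t) * theta) * a_flat + torch.sin(t * theta) * b_flat) / sin_theta
    return merged.reshape(a.shape).to(a.dtype)
```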
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "InnerI/InnerI-AI-sn6-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Yuma42/KangalKhan-ShatteredRuby-7B | Yuma42 | 2024-03-05T10:56:14Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Yuma42/KangalKhan-Ruby-7B-Fixed",
"Yuma42/KangalKhan-RawEmerald-7B",
"conversational",
"en",
"base_model:Yuma42/KangalKhan-Ruby-7B-Fixed",
"base_model:Yuma42/KangalKhan-RawEmerald-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T14:13:17Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Yuma42/KangalKhan-Ruby-7B-Fixed
- Yuma42/KangalKhan-RawEmerald-7B
base_model:
- Yuma42/KangalKhan-Ruby-7B-Fixed
- Yuma42/KangalKhan-RawEmerald-7B
model-index:
- name: KangalKhan-ShatteredRuby-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-ShatteredRuby-7B
name: Open LLM Leaderboard
---
# KangalKhan-ShatteredRuby-7B
KangalKhan-ShatteredRuby-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Yuma42/KangalKhan-Ruby-7B-Fixed](https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed)
* [Yuma42/KangalKhan-RawEmerald-7B](https://huggingface.co/Yuma42/KangalKhan-RawEmerald-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Yuma42/KangalKhan-Ruby-7B-Fixed
layer_range: [0, 32]
- model: Yuma42/KangalKhan-RawEmerald-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Yuma42/KangalKhan-Ruby-7B-Fixed
parameters:
t:
- filter: self_attn
value: [0.97, 0.75, 0.35, 0.55, 0.1]
- filter: mlp
value: [0.03, 0.25, 0.65, 0.45, 0.9]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-ShatteredRuby-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-ShatteredRuby-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.70|
|AI2 Reasoning Challenge (25-Shot)|66.21|
|HellaSwag (10-Shot) |85.38|
|MMLU (5-Shot) |63.29|
|TruthfulQA (0-shot) |56.99|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |61.71|
|
BarraHome/Mistroll-7B-v0.1-4bit | BarraHome | 2024-02-21T07:09:40Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-21T07:04:44Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** BarraHome
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
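As a minimal loading sketch (not part of the original card; the sequence length and the `[INST]` prompt format are assumptions), the 4-bit checkpoint can be pulled straight into Unsloth:

```python
# Minimal Unsloth loading sketch; max_seq_length and the [INST] prompt format are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BarraHome/Mistroll-7B-v0.1-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into faster inference mode

inputs = tokenizer("[INST] What is a large language model? [/INST]", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```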
|
vishnukv/WestSeverusJaskier | vishnukv | 2024-03-05T03:34:26Z | 722 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:PetroGPT/WestSeverus-7B-DPO",
"base_model:bardsai/jaskier-7b-dpo-v6.1",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T17:29:21Z | ---
license: mit
library_name: transformers
tags:
- mergekit
- merge
base_model:
- PetroGPT/WestSeverus-7B-DPO
- bardsai/jaskier-7b-dpo-v6.1
model-index:
- name: WestSeverusJaskier
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 73.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.87
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vishnukv/WestSeverusJaskier
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* [bardsai/jaskier-7b-dpo-v6.1](https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1)
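The original card does not publish the mergekit configuration. Purely for illustration, a SLERP config in the same style as the other merges in this collection could look like the sketch below; the layer ranges and interpolation value are assumptions, not the settings actually used.

```yaml
# Hypothetical SLERP config (illustration only; the real settings for this merge are not published)
slices:
  - sources:
      - model: PetroGPT/WestSeverus-7B-DPO
        layer_range: [0, 32]
      - model: bardsai/jaskier-7b-dpo-v6.1
        layer_range: [0, 32]
merge_method: slerp
base_model: PetroGPT/WestSeverus-7B-DPO
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```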
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vishnukv__WestSeverusJaskier)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.67|
|AI2 Reasoning Challenge (25-Shot)|71.76|
|HellaSwag (10-Shot) |88.16|
|MMLU (5-Shot) |64.94|
|TruthfulQA (0-shot) |73.18|
|Winogrande (5-shot) |82.87|
|GSM8k (5-shot) |73.09|
|
Kukedlc/NeuralFusion-7b-Dare-Ties | Kukedlc | 2024-03-04T14:36:08Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralMaxime-7B-slerp",
"Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"Kukedlc/NeoCortex-7B-slerp",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp",
"base_model:Kukedlc/NeoCortex-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-29T21:09:16Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeoCortex-7B-slerp
base_model:
- Kukedlc/NeuralMaxime-7B-slerp
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- Kukedlc/NeoCortex-7B-slerp
model-index:
- name: NeuralFusion-7b-Dare-Ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.21
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 73.32
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralFusion-7b-Dare-Ties
name: Open LLM Leaderboard
---
# NeuralFusion-7b-Dare-Ties
NeuralFusion-7b-Dare-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
* [Kukedlc/Fasciculus-Arcuatus-7B-slerp](https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp)
* [Kukedlc/NeoCortex-7B-slerp](https://huggingface.co/Kukedlc/NeoCortex-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: mlabonne/Monarch-7B
# no parameters necessary for base model
- model: Kukedlc/NeuralMaxime-7B-slerp
parameters:
density: 0.65
weight: 0.36
- model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
parameters:
density: 0.6
weight: 0.34
- model: Kukedlc/NeoCortex-7B-slerp
parameters:
density: 0.6
weight: 0.3
merge_method: dare_ties
base_model: mlabonne/Monarch-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralFusion-7b-Dare-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralFusion-7b-Dare-Ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.94|
|AI2 Reasoning Challenge (25-Shot)|73.21|
|HellaSwag (10-Shot) |88.96|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |73.32|
|Winogrande (5-shot) |85.56|
|GSM8k (5-shot) |69.83|
|
Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1 | Eric111 | 2024-03-09T21:21:11Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T20:07:53Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- fblgit/UNA-TheBeagle-7b-v1
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: fblgit/UNA-TheBeagle-7b-v1
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
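To reproduce a merge from a config like this, mergekit provides a command-line entry point; a minimal sketch, assuming the YAML above is saved as `config.yml` (the output path and flags are placeholders, not from the original card):

```shell
pip install mergekit
# Run the SLERP merge described by the config above; ./merged-model is a placeholder output directory.
mergekit-yaml config.yml ./merged-model --cuda
```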
|
SourAsslips/Gilbert | SourAsslips | 2024-03-10T02:35:24Z | 722 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T02:33:19Z | Entry not found |
Sao10K/Shiki-m7 | Sao10K | 2024-03-13T15:01:49Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T02:51:41Z | ---
license: cc-by-nc-4.0
language:
- en
---
Experimental model, still under testing; your mileage may vary.
filler text:
That was a time when only the dead
could smile, glad to be at peace,
and what had happened then,
let black cloths cover it,
and let the lanterns be carried away...
Night.
Already madness with its wing
has covered half of the soul,
and plies it with fiery wine
and beckons into the black valley.
And I understood that to it
I must surrender the victory,
listening closely to my own
delirium, as if it were another's.
And only dusty flowers,
and the ringing of the censer, and footprints
leading somewhere into nowhere.
And staring me straight in the eyes,
threatening imminent doom,
an enormous star.
Before this grief the mountains bend,
the great river does not flow.
Already madness with its wing
has covered half of the soul,
and plies it with fiery wine
and beckons into the black valley.
And I understood that to it
I must surrender the victory,
listening closely to my own
delirium, as if it were another's. |
mlabonne/FrankenMonarch-7B | mlabonne | 2024-03-20T19:07:10Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"base_model:mlabonne/AlphaMonarch-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T10:35:55Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- mlabonne/AlphaMonarch-7B
- mlabonne/AlphaMonarch-7B
- mlabonne/AlphaMonarch-7B
- mlabonne/AlphaMonarch-7B
---
# FrankenMonarch-7B
FrankenMonarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
# Quantized versions :
- [**GGUF**](https://huggingface.co/seyf1elislam/FrankenMonarch-7B-GGUF)
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [0,9]
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [5,14]
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [10,19]
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [15,24]
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [20,32]
```
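Note that because a passthrough merge simply stacks the listed slices, the result has 9 + 9 + 9 + 9 + 12 = 48 transformer layers (assuming mergekit's end-exclusive layer ranges) instead of the usual 32, so despite the "7B" name the merged model comes out at roughly 10-11B parameters; this is an estimate, not a figure from the original card.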
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/FrankenMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp | weezywitasneezy | 2024-04-09T17:04:02Z | 722 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp",
"weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp",
"ChaoticNeutrals/Eris_Remix_7B",
"Virt-io/Erebus-Holodeck-7B",
"jeiku/Eros_Prodigadigm_7B",
"Epiculous/Mika-7B",
"base_model:weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp",
"base_model:weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T20:58:30Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
- weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp
- ChaoticNeutrals/Eris_Remix_7B
- Virt-io/Erebus-Holodeck-7B
- jeiku/Eros_Prodigadigm_7B
- Epiculous/Mika-7B
base_model:
- weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
- weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp
model-index:
- name: OxytocinErosEngineeringFX-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp
name: Open LLM Leaderboard
---
# OxytocinErosEngineeringFX-7B-slerp
<img src="https://cdn-uploads.huggingface.co/production/uploads/632b22e66cb20ba0ae82bf06/iNmYhNFQJ-fdJhuzjvRaO.png"
width="512"
height="512" />
This is a combination of the following four Mistral 7B (v0.2?) models:
* [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B)
* [Virt-io/Erebus-Holodeck-7B](https://huggingface.co/Virt-io/Erebus-Holodeck-7B)
* [jeiku/Eros_Prodigadigm_7B](https://huggingface.co/jeiku/Eros_Prodigadigm_7B)
* [Epiculous/Mika-7B](https://huggingface.co/Epiculous/Mika-7B)
OxytocinErosEngineeringFX-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp)
* [weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp](https://huggingface.co/weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_weezywitasneezy__OxytocinErosEngineeringFX-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.28|
|AI2 Reasoning Challenge (25-Shot)|66.98|
|HellaSwag (10-Shot) |86.48|
|MMLU (5-Shot) |64.14|
|TruthfulQA (0-shot) |65.25|
|Winogrande (5-shot) |81.45|
|GSM8k (5-shot) |57.39|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
layer_range: [0, 32]
- model: weezywitasneezy/OxytocinErosEngineeringF2-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: weezywitasneezy/OxytocinErosEngineeringF1-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weezywitasneezy/OxytocinErosEngineeringFX-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Gille/StrangeMerges_45-7B-dare_ties | Gille | 2024-03-25T18:21:34Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Q-bert/MetaMath-Cybertron-Starling",
"ozayezerceli/BetterSaul-7B-slerp",
"chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:Q-bert/MetaMath-Cybertron-Starling",
"base_model:ozayezerceli/BetterSaul-7B-slerp",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T18:05:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Q-bert/MetaMath-Cybertron-Starling
- ozayezerceli/BetterSaul-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- Q-bert/MetaMath-Cybertron-Starling
- ozayezerceli/BetterSaul-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---
# StrangeMerges_45-7B-dare_ties
StrangeMerges_45-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
* [ozayezerceli/BetterSaul-7B-slerp](https://huggingface.co/ozayezerceli/BetterSaul-7B-slerp)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: Q-bert/MetaMath-Cybertron-Starling
parameters:
weight: 0.3
density: 0.53
- model: ozayezerceli/BetterSaul-7B-slerp
parameters:
weight: 0.2
density: 0.53
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
weight: 0.4
density: 0.53
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
weight: 0.1
density: 0.53
base_model: Gille/StrangeMerges_44-7B-dare_ties
merge_method: dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_45-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
netcat420/MFANNv0.2 | netcat420 | 2024-04-03T21:44:26Z | 722 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T23:52:49Z | ---
license: apache-2.0
library_name: transformers
datasets:
- netcat420/MFANN
---
netcat420/MFANNv0.2
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.47|
|ARC |62.88|
|HellaSwag |83.85|
|MMLU |60.11|
|TruthfulQA |68.94|
|Winogrande |74.03|
|GSM8K |37|
 |
CampAIgn/Phi-3-mini_16bit | CampAIgn | 2024-05-29T21:15:03Z | 722 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"phi3",
"NLP",
"conversational",
"en",
"fr",
"dataset:CampAIgn/DDD_French_version",
"dataset:CampAIgn/Steam_games_in_french",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T22:45:54Z | ---
language:
- en
- fr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- phi3
- NLP
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
datasets:
- CampAIgn/DDD_French_version
- CampAIgn/Steam_games_in_french
---
# Uploaded model
- **Developed by:** CampAIgn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/RP-Stew-v2.5-34B-GGUF | mradermacher | 2024-06-09T16:06:53Z | 722 | 6 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MarinaraSpaghetti/RP-Stew-v2.5-34B",
"endpoints_compatible",
"region:us"
] | null | 2024-06-08T21:11:14Z | ---
base_model: MarinaraSpaghetti/RP-Stew-v2.5-34B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MarinaraSpaghetti/RP-Stew-v2.5-34B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
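As a minimal sketch of local inference (not from the original card; the file name, context size, and prompt below are assumptions), one of the quants listed in the table further down can be loaded with llama-cpp-python:

```python
# Minimal llama-cpp-python sketch; model_path, n_ctx and the prompt are assumptions.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="RP-Stew-v2.5-34B.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if the library was built with GPU support
)
out = llm("Describe a rainy harbour town in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```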
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.IQ3_M.gguf) | IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RP-Stew-v2.5-34B-GGUF/resolve/main/RP-Stew-v2.5-34B.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
stablediffusionapi/anime-khsjaj | stablediffusionapi | 2024-06-19T08:15:25Z | 722 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-19T08:10:55Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Anime khsjaj API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "anime-khsjaj".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/anime-khsjaj)
Model link: [View model](https://modelslab.com/models/anime-khsjaj)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "anime-khsjaj",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
rexoscare/autocomplete-model | rexoscare | 2024-06-20T18:19:38Z | 722 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-20T18:16:36Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** rexoscare
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Infinirc/Infinirc-Llama3-8B-4bit-AWQ-GEMM-Beta | Infinirc | 2024-06-28T12:35:39Z | 722 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zhtw",
"conversational",
"zh",
"en",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-06-22T09:38:50Z | ---
license: llama3
language:
- zh
- en
library_name: transformers
tags:
- zhtw
--- |
kamalkraj/BioSimCSE-BioLinkBERT-BASE | kamalkraj | 2023-02-02T20:21:03Z | 721 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2022-12-05T07:57:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: "The up-regulation of miR-146a was also detected in cervical cancer tissues."
sentences: ["The expression of miR-146a has been found to be up-regulated in cervical cancer.", "Only concomitant ablation of ERK1 and ERK2 impairs tumor growth."]
example_title: "BioNLP Example"
---
# kamalkraj/BioSimCSE-BioLinkBERT-BASE
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kamalkraj/BioSimCSE-BioLinkBERT-BASE')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kamalkraj/BioSimCSE-BioLinkBERT-BASE')
model = AutoModel.from_pretrained('kamalkraj/BioSimCSE-BioLinkBERT-BASE')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kamalkraj/BioSimCSE-BioLinkBERT-BASE)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7708 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 771,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
```bibtex
@inproceedings{kanakarajan-etal-2022-biosimcse,
title = "{B}io{S}im{CSE}: {B}io{M}edical Sentence Embeddings using Contrastive learning",
author = "Kanakarajan, Kamal raj and
Kundumani, Bhuvana and
Abraham, Abhijith and
Sankarasubbu, Malaikannan",
booktitle = "Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.louhi-1.10",
pages = "81--86",
abstract = "Sentence embeddings in the form of fixed-size vectors that capture the information in the sentence as well as the context are critical components of Natural Language Processing systems. With transformer model based sentence encoders outperforming the other sentence embedding methods in the general domain, we explore the transformer based architectures to generate dense sentence embeddings in the biomedical domain. In this work, we present BioSimCSE, where we train sentence embeddings with domain specific transformer based models with biomedical texts. We assess our model{'}s performance with zero-shot and fine-tuned settings on Semantic Textual Similarity (STS) and Recognizing Question Entailment (RQE) tasks. Our BioSimCSE model using BioLinkBERT achieves state of the art (SOTA) performance on both tasks.",
}
```
|
timm/xcit_tiny_12_p8_224.fb_in1k | timm | 2024-02-10T23:44:06Z | 721 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-13T02:30:58Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_tiny_12_p8_224.fb_in1k
A XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.7
- GMACs: 4.8
- Activations (M): 23.6
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_tiny_12_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_tiny_12_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
digiplay/Gap_2.6 | digiplay | 2023-12-03T14:25:58Z | 721 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-21T13:20:16Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/166637?modelVersionId=187469
Sample image generated by Huggingface's API:
*Gap_2.6_mangledVAE_.safetensors

Sample images generated by AUTOMATIC 1111:


|
TheBloke/Uncensored-Frank-13b-GGUF | TheBloke | 2023-09-27T12:52:42Z | 721 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"base_model:ajibawa-2023/Uncensored-Frank-13b",
"license:cc-by-nc-nd-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-09-19T23:21:55Z | ---
language:
- en
license: cc-by-nc-nd-4.0
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
model_name: Uncensored Frank 13b
base_model: ajibawa-2023/Uncensored-Frank-13b
inference: false
model_creator: Feynman Innovcations
model_type: llama
prompt_template: 'This is a conversation with your Uncensored AI assistant Frank.
Frank can discuss with you at length about various topics such as ---, ---, ---,
. You can ask it anything you want and it will do its best to give you accurate
and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Uncensored Frank 13b - GGUF
- Model creator: [Feynman Innovcations](https://huggingface.co/ajibawa-2023)
- Original model: [Uncensored Frank 13b](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Feynman Innovcations's Uncensored Frank 13b](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Uncensored-Frank-13b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF)
* [Feynman Innovcations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Frank
```
This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovcations's Uncensored Frank 13b](https://huggingface.co/ajibawa-2023/Uncensored-Frank-13b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
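As a quick sanity check on the Q4_K figure (the arithmetic below is my own, and the 16-bit super-block scale and min are an assumption about the storage layout rather than something stated above): a super-block holds 8 × 32 = 256 weights at 4 bits each (1024 bits), plus 8 scales and 8 mins at 6 bits each (96 bits), plus a 16-bit super-block scale and a 16-bit min (32 bits), giving 1152 bits for 256 weights, i.e. 1152 / 256 = 4.5 bpw.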
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [uncensored-frank-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [uncensored-frank-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [uncensored-frank-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [uncensored-frank-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [uncensored-frank-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [uncensored-frank-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [uncensored-frank-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [uncensored-frank-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [uncensored-frank-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [uncensored-frank-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [uncensored-frank-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [uncensored-frank-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF/blob/main/uncensored-frank-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Uncensored-Frank-13b-GGUF and below it, a specific filename to download, such as: uncensored-frank-13b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install 'huggingface-hub>=0.17.1'
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Uncensored-Frank-13b-GGUF uncensored-frank-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Uncensored-Frank-13b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Uncensored-Frank-13b-GGUF uncensored-frank-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
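If you prefer to script the download from Python rather than the shell, the same `huggingface-hub` package exposes `hf_hub_download`. A minimal sketch, using a filename from the Provided files table above:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repo into the current directory.
# Any filename from the "Provided files" table above can be used here.
path = hf_hub_download(
    repo_id="TheBloke/Uncensored-Frank-13b-GGUF",
    filename="uncensored-frank-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(f"Downloaded to: {path}")
```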
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m uncensored-frank-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.\n\nContext\nYou are a Uncensored AI assistant.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Uncensored-Frank-13b-GGUF", model_file="uncensored-frank-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
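The same file can also be loaded with `llama-cpp-python` instead of ctransformers. A minimal sketch, assuming the Q4_K_M file is already in the current directory and using a shortened form of the prompt template shown above:

```python
from llama_cpp import Llama

# n_gpu_layers is the number of layers to offload to GPU; set to 0 for CPU-only.
llm = Llama(
    model_path="./uncensored-frank-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

output = llm(
    "You are a Uncensored AI assistant.\n\nUSER: Who is Frank Costello?\nASSISTANT:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```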
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain (a short sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
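As a rough sketch of the LangChain route, using the `LlamaCpp` wrapper (the `langchain_community` import path is an assumption, as it has moved between LangChain versions):

```python
from langchain_community.llms import LlamaCpp

# Wrap a local GGUF file as a LangChain LLM.
llm = LlamaCpp(
    model_path="./uncensored-frank-13b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
    temperature=0.7,
)

print(llm.invoke("USER: What topics can you discuss?\nASSISTANT:"))
```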
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Feynman Innovations' Uncensored Frank 13b
**Frank: An Uncensored Model**
The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions.
Frank, An Uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without the fear of censorship or restrictions.
Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects.
It is trained on around 150,000 sets of conversations, each set containing 10–15 conversations. Base data was obtained from [Eric Hartford](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
This data was further refined and used for fine-tuning. In addition, more than 80k further synthetic conversations were generated and refined. We will not release this data.
**Warning**
An uncensored model has few or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
**Training:**
The entire dataset was trained on Azure using 4 x A100 80GB GPUs. Training for 3 epochs took around 65 hours. The DeepSpeed codebase was used for training. The model was trained on top of Meta's Llama-2.
**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: <prompt>
ASSISTANT:
```
<!-- original-model-card end -->
|
TheBloke/med42-70B-GGUF | TheBloke | 2023-10-27T23:04:48Z | 721 | 18 | transformers | [
"transformers",
"gguf",
"llama",
"m42",
"health",
"healthcare",
"clinical-llm",
"text-generation",
"en",
"base_model:m42-health/med42-70b",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-27T21:16:14Z | ---
base_model: m42-health/med42-70b
inference: false
language:
- en
license: other
license_name: med42
model_creator: M42 Health
model_name: Med42 70B
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|system|>: You are a helpful medical assistant created by M42 Health
in the UAE.
<|prompter|>:{prompt}
<|assistant|>:
'
quantized_by: TheBloke
tags:
- m42
- health
- healthcare
- clinical-llm
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Med42 70B - GGUF
- Model creator: [M42 Health](https://huggingface.co/m42-health)
- Original model: [Med42 70B](https://huggingface.co/m42-health/med42-70b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [M42 Health's Med42 70B](https://huggingface.co/m42-health/med42-70b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/med42-70B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/med42-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/med42-70B-GGUF)
* [M42 Health's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/m42-health/med42-70b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Med42
```
<|system|>: You are a helpful medical assistant created by M42 Health in the UAE.
<|prompter|>:{prompt}
<|assistant|>:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [M42 Health's Med42 70B](https://huggingface.co/m42-health/med42-70b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
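As a back-of-the-envelope illustration of how these bits-per-weight figures translate into file sizes (the parameter count is an assumption, and real files also contain metadata and some non-quantised tensors, so this is only approximate):

```python
# Rough file-size estimate from parameter count and bits per weight (bpw).
def estimate_size_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB (decimal)

n_params = 69e9  # roughly 69 billion weights for a 70B Llama-2 model (assumption)
for name, bpw in [("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{estimate_size_gb(n_params, bpw):.1f} GB")
```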
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [med42-70b.Q2_K.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [med42-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [med42-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [med42-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [med42-70b.Q4_0.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [med42-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [med42-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [med42-70b.Q5_0.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [med42-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [med42-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/med42-70B-GGUF/blob/main/med42-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| med42-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| med42-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `med42-70b.Q6_K.gguf-split-a`
* `med42-70b.Q6_K.gguf-split-b`
### q8_0
Please download:
* `med42-70b.Q8_0.gguf-split-a`
* `med42-70b.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat med42-70b.Q6_K.gguf-split-* > med42-70b.Q6_K.gguf && rm med42-70b.Q6_K.gguf-split-*
cat med42-70b.Q8_0.gguf-split-* > med42-70b.Q8_0.gguf && rm med42-70b.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B med42-70b.Q6_K.gguf-split-a + med42-70b.Q6_K.gguf-split-b med42-70b.Q6_K.gguf
del med42-70b.Q6_K.gguf-split-a med42-70b.Q6_K.gguf-split-b
COPY /B med42-70b.Q8_0.gguf-split-a + med42-70b.Q8_0.gguf-split-b med42-70b.Q8_0.gguf
del med42-70b.Q8_0.gguf-split-a med42-70b.Q8_0.gguf-split-b
```
</details>
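The join can also be scripted from Python if preferred; this sketch is equivalent to the `cat`/`COPY /B` commands above:

```python
import glob
import shutil

# Concatenate the split parts (sorted so that -split-a precedes -split-b)
# into a single GGUF file; the parts can be deleted afterwards.
parts = sorted(glob.glob("med42-70b.Q6_K.gguf-split-*"))
with open("med42-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```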
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/med42-70B-GGUF and below it, a specific filename to download, such as: med42-70b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/med42-70B-GGUF med42-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/med42-70B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/med42-70B-GGUF med42-70b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
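From Python, the same pattern-based download as the `--include` example above can be done with `snapshot_download`; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download only the Q4_K files from the repo, mirroring --include='*Q4_K*gguf'.
snapshot_download(
    repo_id="TheBloke/med42-70B-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```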
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m med42-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|>: You are a helpful medical assistant created by M42 Health in the UAE.\n<|prompter|>:{prompt}\n<|assistant|>:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/med42-70B-GGUF", model_file="med42-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
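A comparable sketch with `llama-cpp-python`, filling in the Med42 prompt template shown earlier (the Q4_K_M file is assumed to be in the current directory):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./med42-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

prompt = (
    "<|system|>: You are a helpful medical assistant created by M42 Health in the UAE.\n"
    "<|prompter|>:What are the symptoms of diabetes?\n"
    "<|assistant|>:"
)
output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```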
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: M42 Health's Med42 70B
# **Med42 - Clinical Large Language Model**
Med42 is an open-access clinical large language model (LLM) developed by M42 to expand access to medical knowledge. Built off LLaMA-2 and comprising 70 billion parameters, this generative AI system provides high-quality answers to medical questions.
## Model Details
*Note: Use of this model is governed by the M42 Health license. In order to download the model weights (and tokenizer), please read the [Med42 License](https://huggingface.co/spaces/m42-health/License) and accept our License by requesting access here.*
Beginning with the base LLaMa-2 model, Med42 was instruction-tuned on a dataset of ~250M tokens compiled from different open-access sources, including medical flashcards, exam questions, and open-domain dialogues.
**Model Developers:** M42 Health AI Team
**Finetuned from model:** Llama-2 - 70B
**Context length:** 4k tokens
**Input:** Text only data
**Output:** Model generates text only
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
**License:** A custom license is available [here](https://huggingface.co/spaces/m42-health/License)
**Research Paper:** TBA
## Intended Use
Med42 is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and expand access to an LLM for healthcare use. Potential use cases include:
- Medical question answering
- Patient record summarization
- Aiding medical diagnosis
- General health Q&A
To get the expected features and performance from the model, a specific prompt format needs to be followed, including the `<|system|>`, `<|prompter|>` and `<|assistant|>` tags.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name_or_path = "m42-health/med42-70b"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
prompt = "What are the symptoms of diabetes ?"
prompt_template=f'''
<|system|>: You are a helpful medical assistant created by M42 Health in the UAE.
<|prompter|>:{prompt}
<|assistant|>:
'''
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True,eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```
## Hardware and Software
The training process was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
## Evaluation Results
Med42 achieves competitive performance on various medical benchmarks, including MedQA, MedMCQA, PubMedQA, HeadQA, and Measuring Massive Multitask Language Understanding (MMLU) clinical topics. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (except otherwise stated). We compare the performance with that reported for other models (ClinicalCamel-70B, GPT-3.5, GPT-4.0, Med-PaLM 2).
|Dataset|Med42|ClinicalCamel-70B|GPT-3.5|GPT-4.0|Med-PaLM-2 (5-shot)*|
|---|---|---|---|---|---|
|MMLU Clinical Knowledge|74.3|69.8|69.8|86.0|88.3|
|MMLU College Biology|84.0|79.2|72.2|95.1|94.4|
|MMLU College Medicine|68.8|67.0|61.3|76.9|80.9|
|MMLU Medical Genetics|86.0|69.0|70.0|91.0|90.0|
|MMLU Professional Medicine|79.8|71.3|70.2|93.0|95.2|
|MMLU Anatomy|67.4|62.2|56.3|80.0|77.8|
|MedMCQA|60.9|47.0|50.1|69.5|71.3|
|MedQA|61.5|53.4|50.8|78.9|79.7|
|USMLE Self-Assessment|71.7|-|49.1|83.8|-|
|USMLE Sample Exam|72.0|54.3|56.9|84.3|-|
**We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*.
### Key performance metrics:
- Med42 achieves a 72% accuracy on the US Medical Licensing Examination (USMLE) sample exam, surpassing the prior state of the art among openly available medical LLMs.
- 61.5% on MedQA dataset (compared to 50.8% for GPT-3.5)
- Consistently higher performance on MMLU clinical topics compared to GPT-3.5.
## Limitations & Safe Use
- Med42 is not ready for real clinical use. Extensive human evaluation is still underway, as it is required to ensure safety.
- Potential for generating incorrect or harmful information.
- Risk of perpetuating biases in training data.
Use this model responsibly! Do not rely on it for medical usage without rigorous safety testing.
## Accessing Med42 and Reporting Issues
Please report any software "bug" or other problems through one of the following means:
- Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42)
- Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/YMJu3kcKat](https://forms.office.com/r/YMJu3kcKat)
- M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/)
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>
<!-- original-model-card end -->
|
ncsgobubble/Llama-7B-rollercoaster_v2 | ncsgobubble | 2024-01-23T14:07:38Z | 721 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Llama-2-7b-chat-hf",
"SuvajitGB/rollercoaster_emotions_v2",
"conversational",
"base_model:SuvajitGB/rollercoaster_emotions_v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-09T09:57:59Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- meta-llama/Llama-2-7b-chat-hf
- SuvajitGB/rollercoaster_emotions_v2
base_model:
- SuvajitGB/rollercoaster_emotions_v2
---
# Llama-7B-rollercoaster_v2
Llama-7B-rollercoaster_v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
* [SuvajitGB/rollercoaster_emotions_v2](https://huggingface.co/SuvajitGB/rollercoaster_emotions_v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: meta-llama/Llama-2-7b-chat-hf
layer_range: [0, 32]
- model: SuvajitGB/rollercoaster_emotions_v2
layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Llama-2-7b-chat-hf
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
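To make the `t` values above concrete, here is a small illustrative sketch of spherical linear interpolation (SLERP) between two weight tensors; it is not mergekit's actual implementation, but shows how `t = 0` keeps the base model's weights and `t = 1` keeps the other model's, with the config's gradients varying `t` across layers for `self_attn` and `mlp` parameters:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel vectors: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# t=0 -> base model weights, t=1 -> other model weights, 0.5 -> halfway along the arc.
w_base, w_other = torch.randn(4096, 4096), torch.randn(4096, 4096)
blended = slerp(0.5, w_base, w_other)
```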
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ncsgobubble/Llama-7B-rollercoaster_v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
macadeliccc/Laser-WestLake-2x7b | macadeliccc | 2024-03-04T16:33:44Z | 721 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T18:07:39Z | ---
license: apache-2.0
model-index:
- name: Laser-WestLake-2x7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.25
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/Laser-WestLake-2x7b
name: Open LLM Leaderboard
---
# Laser-Westlake-2x7B

This model is a moerge of [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser) and [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
## Process
+ WestLake-7B-v2-laser serves as both the base model and one of the experts, rather than using three different models.
+ I will attempt to laser the final product as well, but given that the base has already been lasered, it may not work out.
+ I have another version using the original, non-lasered WestLake-7B that I will also attempt to laser and report the differences.
# Usage
Usage is the same as the [original WestLake-7B](https://huggingface.co/senseable/WestLake-7B-v2)
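For reference, a minimal transformers sketch (whether this repo's tokenizer ships a chat template is an assumption; the benchmark below reports a ChatML prompt format, which `apply_chat_template` would produce if defined):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/Laser-WestLake-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a short story about a lighthouse keeper."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```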
## GGUF
Available [here](https://huggingface.co/macadeliccc/Laser-WestLake-2x7b-GGUF)
## Evaluations
<pre>----Benchmark Complete----
2024-01-27 19:12:49
Time taken: 24.3 mins
Prompt Format: ChatML
Model: macadeliccc/Laser-WestLake-2x7b-GGUF
Score (v2): 75.42
Parseable: 171.0
---------------
Batch completed
Time taken: 24.4 mins
---------------
</pre>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__Laser-WestLake-2x7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.00|
|AI2 Reasoning Challenge (25-Shot)|72.27|
|HellaSwag (10-Shot) |88.44|
|MMLU (5-Shot) |64.71|
|TruthfulQA (0-shot) |69.25|
|Winogrande (5-shot) |85.79|
|GSM8k (5-shot) |63.53|
|
rhasspy/faster-whisper-base-int8 | rhasspy | 2024-03-10T18:45:38Z | 721 | 1 | transformers | [
"transformers",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T18:45:09Z | ---
license: mit
---
|
Azazelle/Mocha-Dare-7b-ex | Azazelle | 2024-03-23T05:31:20Z | 721 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:akjindal53244/Mistral-7B-v0.1-Open-Platypus",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:mistralai/Mistral-7B-v0.1",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T04:42:31Z | ---
pipeline_tag: text-generation
base_model:
- Open-Orca/Mistral-7B-OpenOrca
- akjindal53244/Mistral-7B-v0.1-Open-Platypus
- WizardLM/WizardMath-7B-V1.1
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-4.0
---
# Mocha-Dare-7b-ex
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [akjindal53244/Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
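To illustrate what the `density` and `weight` parameters above control, here is a small conceptual sketch of DARE's drop-and-rescale step applied to a task vector (the difference between a fine-tuned weight and its base); it is not mergekit's actual code:

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep `density` of a task vector's entries and rescale to preserve its expectation."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

base = torch.randn(4096)
finetuned = base + 0.01 * torch.randn(4096)
delta = finetuned - base                       # task vector for one model
sparse_delta = dare_drop_and_rescale(delta, 0.5)
merged = base + 1.0 * sparse_delta             # `weight` scales each model's contribution
```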
|
yyh0901/lloma_step400 | yyh0901 | 2024-04-06T11:40:09Z | 721 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T08:52:34Z | ---
license: apache-2.0
pipeline_tag: text-generation
---
The model configuration (`config.json`) is:
```json
{
  "_name_or_path": "/data/yyh/model/models--meta-llama--Llama-2-7b-hf",
  "architectures": ["LlamaForCausalLM"],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 65536,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 16.0,
    "type": "dynamic"
  },
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.39.2",
  "use_cache": true,
  "vocab_size": 32000
}
```
Extended context length; trained for 400 steps on Llama-2-7b.
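A short sketch of how the dynamic RoPE scaling from this config can be recreated with transformers (values copied from the config above):

```python
from transformers import LlamaConfig

# Recreate the key long-context settings from the config above.
config = LlamaConfig(
    hidden_size=4096,
    num_hidden_layers=32,
    num_attention_heads=32,
    max_position_embeddings=65536,
    rope_theta=10000.0,
    rope_scaling={"type": "dynamic", "factor": 16.0},
)
print(config.max_position_embeddings)  # 65536
```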